00:00:00.000 Started by upstream project "autotest-per-patch" build number 132293 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.101 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.101 The recommended git tool is: git 00:00:00.102 using credential 00000000-0000-0000-0000-000000000002 00:00:00.104 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.162 Fetching changes from the remote Git repository 00:00:00.165 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.217 Using shallow fetch with depth 1 00:00:00.217 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.217 > git --version # timeout=10 00:00:00.264 > git --version # 'git version 2.39.2' 00:00:00.264 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.289 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.289 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.398 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.410 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.423 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:07.423 > git config core.sparsecheckout # timeout=10 00:00:07.436 > git read-tree -mu HEAD # timeout=10 00:00:07.452 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:07.470 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:07.470 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:07.596 [Pipeline] Start of Pipeline 00:00:07.609 [Pipeline] library 00:00:07.610 Loading library shm_lib@master 00:00:07.610 Library shm_lib@master is cached. Copying from home. 00:00:07.623 [Pipeline] node 00:00:07.636 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:07.637 [Pipeline] { 00:00:07.644 [Pipeline] catchError 00:00:07.645 [Pipeline] { 00:00:07.655 [Pipeline] wrap 00:00:07.662 [Pipeline] { 00:00:07.669 [Pipeline] stage 00:00:07.670 [Pipeline] { (Prologue) 00:00:07.685 [Pipeline] echo 00:00:07.687 Node: VM-host-SM17 00:00:07.691 [Pipeline] cleanWs 00:00:07.699 [WS-CLEANUP] Deleting project workspace... 00:00:07.699 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.704 [WS-CLEANUP] done 00:00:07.878 [Pipeline] setCustomBuildProperty 00:00:07.958 [Pipeline] httpRequest 00:00:08.587 [Pipeline] echo 00:00:08.590 Sorcerer 10.211.164.101 is alive 00:00:08.600 [Pipeline] retry 00:00:08.602 [Pipeline] { 00:00:08.616 [Pipeline] httpRequest 00:00:08.620 HttpMethod: GET 00:00:08.621 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.622 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.639 Response Code: HTTP/1.1 200 OK 00:00:08.640 Success: Status code 200 is in the accepted range: 200,404 00:00:08.641 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:09.548 [Pipeline] } 00:00:09.566 [Pipeline] // retry 00:00:09.574 [Pipeline] sh 00:00:09.854 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:09.871 [Pipeline] httpRequest 00:00:11.240 [Pipeline] echo 00:00:11.242 Sorcerer 10.211.164.101 is alive 00:00:11.252 [Pipeline] retry 00:00:11.254 [Pipeline] { 00:00:11.268 [Pipeline] httpRequest 00:00:11.272 HttpMethod: GET 00:00:11.273 URL: http://10.211.164.101/packages/spdk_4b2d483c63162e17641f75a0719927be08118be9.tar.gz 00:00:11.273 Sending request to url: http://10.211.164.101/packages/spdk_4b2d483c63162e17641f75a0719927be08118be9.tar.gz 00:00:11.287 Response Code: HTTP/1.1 200 OK 00:00:11.287 Success: Status code 200 is in the accepted range: 200,404 00:00:11.288 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_4b2d483c63162e17641f75a0719927be08118be9.tar.gz 00:00:40.478 [Pipeline] } 00:00:40.492 [Pipeline] // retry 00:00:40.500 [Pipeline] sh 00:00:40.779 + tar --no-same-owner -xf spdk_4b2d483c63162e17641f75a0719927be08118be9.tar.gz 00:00:43.320 [Pipeline] sh 00:00:43.599 + git -C spdk log --oneline -n5 00:00:43.599 4b2d483c6 dif: Add spdk_dif_pi_format_get_pi_size() to use for NVMe PRACT 00:00:43.599 560a1dde3 bdev/malloc: Support accel sequence when DIF is enabled 00:00:43.599 30279d1cf bdev: Add spdk_bdev_io_has_no_metadata() for bdev modules 00:00:43.599 4bd31eb0a bdev/malloc: Extract internal of verify_pi() for code reuse 00:00:43.599 2093c51b3 bdev/malloc: malloc_done() uses switch-case for clean up 00:00:43.618 [Pipeline] writeFile 00:00:43.648 [Pipeline] sh 00:00:43.928 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:43.940 [Pipeline] sh 00:00:44.219 + cat autorun-spdk.conf 00:00:44.219 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:44.219 SPDK_TEST_NVMF=1 00:00:44.219 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:44.219 SPDK_TEST_URING=1 00:00:44.219 SPDK_TEST_USDT=1 00:00:44.219 SPDK_RUN_UBSAN=1 00:00:44.219 NET_TYPE=virt 00:00:44.219 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:44.224 RUN_NIGHTLY=0 00:00:44.226 [Pipeline] } 00:00:44.239 [Pipeline] // stage 00:00:44.253 [Pipeline] stage 00:00:44.255 [Pipeline] { (Run VM) 00:00:44.267 [Pipeline] sh 00:00:44.547 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:44.547 + echo 'Start stage prepare_nvme.sh' 00:00:44.547 Start stage prepare_nvme.sh 00:00:44.547 + [[ -n 5 ]] 00:00:44.547 + disk_prefix=ex5 00:00:44.547 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:44.547 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:44.547 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:44.547 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:44.547 ++ SPDK_TEST_NVMF=1 00:00:44.547 
++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:44.547 ++ SPDK_TEST_URING=1 00:00:44.547 ++ SPDK_TEST_USDT=1 00:00:44.547 ++ SPDK_RUN_UBSAN=1 00:00:44.547 ++ NET_TYPE=virt 00:00:44.547 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:44.547 ++ RUN_NIGHTLY=0 00:00:44.547 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:44.547 + nvme_files=() 00:00:44.547 + declare -A nvme_files 00:00:44.547 + backend_dir=/var/lib/libvirt/images/backends 00:00:44.547 + nvme_files['nvme.img']=5G 00:00:44.547 + nvme_files['nvme-cmb.img']=5G 00:00:44.547 + nvme_files['nvme-multi0.img']=4G 00:00:44.547 + nvme_files['nvme-multi1.img']=4G 00:00:44.547 + nvme_files['nvme-multi2.img']=4G 00:00:44.547 + nvme_files['nvme-openstack.img']=8G 00:00:44.547 + nvme_files['nvme-zns.img']=5G 00:00:44.547 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:44.547 + (( SPDK_TEST_FTL == 1 )) 00:00:44.547 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:44.547 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:44.547 + for nvme in "${!nvme_files[@]}" 00:00:44.547 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:00:44.547 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:44.547 + for nvme in "${!nvme_files[@]}" 00:00:44.547 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:00:44.547 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:44.547 + for nvme in "${!nvme_files[@]}" 00:00:44.547 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:00:44.547 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:44.547 + for nvme in "${!nvme_files[@]}" 00:00:44.547 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:00:44.547 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:44.547 + for nvme in "${!nvme_files[@]}" 00:00:44.547 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:00:44.547 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:44.547 + for nvme in "${!nvme_files[@]}" 00:00:44.547 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:00:44.547 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:44.547 + for nvme in "${!nvme_files[@]}" 00:00:44.547 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:00:44.805 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:44.805 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:00:45.063 + echo 'End stage prepare_nvme.sh' 00:00:45.063 End stage prepare_nvme.sh 00:00:45.073 [Pipeline] sh 00:00:45.352 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:45.352 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b 
/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:00:45.352 00:00:45.352 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:45.352 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:45.352 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:45.352 HELP=0 00:00:45.352 DRY_RUN=0 00:00:45.352 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:00:45.352 NVME_DISKS_TYPE=nvme,nvme, 00:00:45.352 NVME_AUTO_CREATE=0 00:00:45.352 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:00:45.352 NVME_CMB=,, 00:00:45.352 NVME_PMR=,, 00:00:45.352 NVME_ZNS=,, 00:00:45.352 NVME_MS=,, 00:00:45.352 NVME_FDP=,, 00:00:45.352 SPDK_VAGRANT_DISTRO=fedora39 00:00:45.352 SPDK_VAGRANT_VMCPU=10 00:00:45.352 SPDK_VAGRANT_VMRAM=12288 00:00:45.352 SPDK_VAGRANT_PROVIDER=libvirt 00:00:45.352 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:45.352 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:45.352 SPDK_OPENSTACK_NETWORK=0 00:00:45.352 VAGRANT_PACKAGE_BOX=0 00:00:45.352 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:45.352 FORCE_DISTRO=true 00:00:45.352 VAGRANT_BOX_VERSION= 00:00:45.352 EXTRA_VAGRANTFILES= 00:00:45.352 NIC_MODEL=e1000 00:00:45.352 00:00:45.352 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:00:45.352 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:48.638 Bringing machine 'default' up with 'libvirt' provider... 00:00:48.896 ==> default: Creating image (snapshot of base box volume). 00:00:49.155 ==> default: Creating domain with the following settings... 
00:00:49.155 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731665929_482124b4e79b0a12fb45 00:00:49.155 ==> default: -- Domain type: kvm 00:00:49.155 ==> default: -- Cpus: 10 00:00:49.155 ==> default: -- Feature: acpi 00:00:49.155 ==> default: -- Feature: apic 00:00:49.155 ==> default: -- Feature: pae 00:00:49.155 ==> default: -- Memory: 12288M 00:00:49.155 ==> default: -- Memory Backing: hugepages: 00:00:49.155 ==> default: -- Management MAC: 00:00:49.155 ==> default: -- Loader: 00:00:49.155 ==> default: -- Nvram: 00:00:49.155 ==> default: -- Base box: spdk/fedora39 00:00:49.155 ==> default: -- Storage pool: default 00:00:49.155 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731665929_482124b4e79b0a12fb45.img (20G) 00:00:49.155 ==> default: -- Volume Cache: default 00:00:49.155 ==> default: -- Kernel: 00:00:49.155 ==> default: -- Initrd: 00:00:49.155 ==> default: -- Graphics Type: vnc 00:00:49.155 ==> default: -- Graphics Port: -1 00:00:49.155 ==> default: -- Graphics IP: 127.0.0.1 00:00:49.155 ==> default: -- Graphics Password: Not defined 00:00:49.155 ==> default: -- Video Type: cirrus 00:00:49.155 ==> default: -- Video VRAM: 9216 00:00:49.155 ==> default: -- Sound Type: 00:00:49.155 ==> default: -- Keymap: en-us 00:00:49.155 ==> default: -- TPM Path: 00:00:49.155 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:49.155 ==> default: -- Command line args: 00:00:49.155 ==> default: -> value=-device, 00:00:49.155 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:49.155 ==> default: -> value=-drive, 00:00:49.155 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:00:49.155 ==> default: -> value=-device, 00:00:49.155 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:49.155 ==> default: -> value=-device, 00:00:49.155 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:49.155 ==> default: -> value=-drive, 00:00:49.155 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:49.155 ==> default: -> value=-device, 00:00:49.155 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:49.155 ==> default: -> value=-drive, 00:00:49.155 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:49.155 ==> default: -> value=-device, 00:00:49.155 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:49.155 ==> default: -> value=-drive, 00:00:49.155 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:49.155 ==> default: -> value=-device, 00:00:49.155 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:49.413 ==> default: Creating shared folders metadata... 00:00:49.413 ==> default: Starting domain. 00:00:51.317 ==> default: Waiting for domain to get an IP address... 00:01:09.402 ==> default: Waiting for SSH to become available... 00:01:09.402 ==> default: Configuring and enabling network interfaces... 
00:01:11.936 default: SSH address: 192.168.121.182:22 00:01:11.936 default: SSH username: vagrant 00:01:11.936 default: SSH auth method: private key 00:01:13.866 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:22.005 ==> default: Mounting SSHFS shared folder... 00:01:22.941 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:22.941 ==> default: Checking Mount.. 00:01:24.318 ==> default: Folder Successfully Mounted! 00:01:24.318 ==> default: Running provisioner: file... 00:01:25.254 default: ~/.gitconfig => .gitconfig 00:01:25.513 00:01:25.513 SUCCESS! 00:01:25.513 00:01:25.513 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:25.513 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:25.513 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:25.513 00:01:25.522 [Pipeline] } 00:01:25.537 [Pipeline] // stage 00:01:25.546 [Pipeline] dir 00:01:25.547 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:01:25.548 [Pipeline] { 00:01:25.561 [Pipeline] catchError 00:01:25.563 [Pipeline] { 00:01:25.577 [Pipeline] sh 00:01:25.873 + vagrant ssh-config --host vagrant 00:01:25.873 + sed -ne /^Host/,$p 00:01:25.873 + tee ssh_conf 00:01:29.174 Host vagrant 00:01:29.174 HostName 192.168.121.182 00:01:29.174 User vagrant 00:01:29.174 Port 22 00:01:29.174 UserKnownHostsFile /dev/null 00:01:29.174 StrictHostKeyChecking no 00:01:29.174 PasswordAuthentication no 00:01:29.174 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:29.174 IdentitiesOnly yes 00:01:29.174 LogLevel FATAL 00:01:29.174 ForwardAgent yes 00:01:29.174 ForwardX11 yes 00:01:29.174 00:01:29.189 [Pipeline] withEnv 00:01:29.191 [Pipeline] { 00:01:29.207 [Pipeline] sh 00:01:29.486 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:29.486 source /etc/os-release 00:01:29.486 [[ -e /image.version ]] && img=$(< /image.version) 00:01:29.486 # Minimal, systemd-like check. 00:01:29.486 if [[ -e /.dockerenv ]]; then 00:01:29.486 # Clear garbage from the node's name: 00:01:29.486 # agt-er_autotest_547-896 -> autotest_547-896 00:01:29.486 # $HOSTNAME is the actual container id 00:01:29.486 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:29.486 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:29.486 # We can assume this is a mount from a host where container is running, 00:01:29.486 # so fetch its hostname to easily identify the target swarm worker. 
00:01:29.486 container="$(< /etc/hostname) ($agent)" 00:01:29.486 else 00:01:29.486 # Fallback 00:01:29.486 container=$agent 00:01:29.486 fi 00:01:29.486 fi 00:01:29.486 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:29.486 00:01:29.756 [Pipeline] } 00:01:29.774 [Pipeline] // withEnv 00:01:29.781 [Pipeline] setCustomBuildProperty 00:01:29.795 [Pipeline] stage 00:01:29.797 [Pipeline] { (Tests) 00:01:29.813 [Pipeline] sh 00:01:30.092 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:30.105 [Pipeline] sh 00:01:30.384 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:30.659 [Pipeline] timeout 00:01:30.660 Timeout set to expire in 1 hr 0 min 00:01:30.662 [Pipeline] { 00:01:30.676 [Pipeline] sh 00:01:30.956 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:31.523 HEAD is now at 4b2d483c6 dif: Add spdk_dif_pi_format_get_pi_size() to use for NVMe PRACT 00:01:31.535 [Pipeline] sh 00:01:31.813 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:32.082 [Pipeline] sh 00:01:32.361 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:32.632 [Pipeline] sh 00:01:32.911 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:33.169 ++ readlink -f spdk_repo 00:01:33.169 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:33.169 + [[ -n /home/vagrant/spdk_repo ]] 00:01:33.169 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:33.169 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:33.169 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:33.169 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:33.169 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:33.169 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:33.169 + cd /home/vagrant/spdk_repo 00:01:33.169 + source /etc/os-release 00:01:33.169 ++ NAME='Fedora Linux' 00:01:33.169 ++ VERSION='39 (Cloud Edition)' 00:01:33.169 ++ ID=fedora 00:01:33.169 ++ VERSION_ID=39 00:01:33.169 ++ VERSION_CODENAME= 00:01:33.169 ++ PLATFORM_ID=platform:f39 00:01:33.169 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:33.169 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:33.169 ++ LOGO=fedora-logo-icon 00:01:33.169 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:33.169 ++ HOME_URL=https://fedoraproject.org/ 00:01:33.169 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:33.169 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:33.169 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:33.169 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:33.169 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:33.169 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:33.169 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:33.169 ++ SUPPORT_END=2024-11-12 00:01:33.169 ++ VARIANT='Cloud Edition' 00:01:33.169 ++ VARIANT_ID=cloud 00:01:33.169 + uname -a 00:01:33.169 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:33.169 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:33.428 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:33.428 Hugepages 00:01:33.428 node hugesize free / total 00:01:33.428 node0 1048576kB 0 / 0 00:01:33.428 node0 2048kB 0 / 0 00:01:33.428 00:01:33.428 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:33.686 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:33.686 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:33.686 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:33.686 + rm -f /tmp/spdk-ld-path 00:01:33.686 + source autorun-spdk.conf 00:01:33.686 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.686 ++ SPDK_TEST_NVMF=1 00:01:33.686 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:33.686 ++ SPDK_TEST_URING=1 00:01:33.686 ++ SPDK_TEST_USDT=1 00:01:33.686 ++ SPDK_RUN_UBSAN=1 00:01:33.686 ++ NET_TYPE=virt 00:01:33.686 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:33.686 ++ RUN_NIGHTLY=0 00:01:33.686 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:33.686 + [[ -n '' ]] 00:01:33.686 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:33.686 + for M in /var/spdk/build-*-manifest.txt 00:01:33.686 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:33.686 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:33.686 + for M in /var/spdk/build-*-manifest.txt 00:01:33.686 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:33.686 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:33.686 + for M in /var/spdk/build-*-manifest.txt 00:01:33.686 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:33.686 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:33.686 ++ uname 00:01:33.686 + [[ Linux == \L\i\n\u\x ]] 00:01:33.686 + sudo dmesg -T 00:01:33.686 + sudo dmesg --clear 00:01:33.686 + dmesg_pid=5200 00:01:33.686 + sudo dmesg -Tw 00:01:33.686 + [[ Fedora Linux == FreeBSD ]] 00:01:33.686 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:33.686 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:33.686 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:33.687 + [[ -x /usr/src/fio-static/fio ]] 00:01:33.687 + export FIO_BIN=/usr/src/fio-static/fio 00:01:33.687 + FIO_BIN=/usr/src/fio-static/fio 00:01:33.687 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:33.687 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:33.687 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:33.687 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:33.687 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:33.687 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:33.687 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:33.687 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:33.687 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:33.945 10:19:34 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:33.945 10:19:34 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:33.945 10:19:34 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.945 10:19:34 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:33.945 10:19:34 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:33.945 10:19:34 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:01:33.945 10:19:34 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:01:33.945 10:19:34 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:33.945 10:19:34 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:01:33.945 10:19:34 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:33.945 10:19:34 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:33.945 10:19:34 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:33.945 10:19:34 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:33.945 10:19:34 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:33.945 10:19:34 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:33.945 10:19:34 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:33.945 10:19:34 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:33.945 10:19:34 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:33.945 10:19:34 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:33.946 10:19:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.946 10:19:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.946 10:19:34 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.946 10:19:34 -- paths/export.sh@5 -- $ export PATH 00:01:33.946 10:19:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.946 10:19:34 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:33.946 10:19:34 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:33.946 10:19:34 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731665974.XXXXXX 00:01:33.946 10:19:34 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731665974.cJ7IME 00:01:33.946 10:19:34 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:33.946 10:19:34 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:33.946 10:19:34 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:33.946 10:19:34 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:33.946 10:19:34 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:33.946 10:19:34 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:33.946 10:19:34 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:33.946 10:19:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:33.946 10:19:34 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:33.946 10:19:34 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:33.946 10:19:34 -- pm/common@17 -- $ local monitor 00:01:33.946 10:19:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:33.946 10:19:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:33.946 10:19:34 -- pm/common@25 -- $ sleep 1 00:01:33.946 10:19:34 -- pm/common@21 -- $ date +%s 00:01:33.946 10:19:34 -- pm/common@21 -- $ date +%s 00:01:33.946 10:19:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731665974 00:01:33.946 10:19:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731665974 00:01:33.946 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731665974_collect-cpu-load.pm.log 00:01:33.946 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731665974_collect-vmstat.pm.log 00:01:34.882 10:19:35 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:34.882 10:19:35 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:34.882 10:19:35 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:34.882 10:19:35 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:34.882 10:19:35 -- spdk/autobuild.sh@16 -- $ date -u 00:01:34.882 Fri Nov 15 10:19:35 AM UTC 2024 00:01:34.882 10:19:35 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:34.882 v25.01-pre-210-g4b2d483c6 00:01:34.882 10:19:35 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:34.882 10:19:35 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:34.882 10:19:35 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:34.882 10:19:35 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:34.882 10:19:35 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:34.882 10:19:35 -- common/autotest_common.sh@10 -- $ set +x 00:01:34.882 ************************************ 00:01:34.882 START TEST ubsan 00:01:34.882 ************************************ 00:01:34.882 using ubsan 00:01:34.882 10:19:35 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:01:34.882 00:01:34.882 real 0m0.000s 00:01:34.882 user 0m0.000s 00:01:34.882 sys 0m0.000s 00:01:34.882 10:19:35 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:34.882 ************************************ 00:01:34.882 END TEST ubsan 00:01:34.882 10:19:35 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:34.883 ************************************ 00:01:34.883 10:19:35 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:34.883 10:19:35 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:34.883 10:19:35 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:34.883 10:19:35 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:34.883 10:19:35 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:34.883 10:19:35 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:34.883 10:19:35 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:34.883 10:19:35 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:34.883 10:19:35 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:35.142 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:35.142 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:35.401 Using 'verbs' RDMA provider 00:01:51.216 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:03.436 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:03.436 Creating mk/config.mk...done. 00:02:03.436 Creating mk/cc.flags.mk...done. 00:02:03.436 Type 'make' to build. 
00:02:03.436 10:20:03 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:03.437 10:20:03 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:03.437 10:20:03 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:03.437 10:20:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.437 ************************************ 00:02:03.437 START TEST make 00:02:03.437 ************************************ 00:02:03.437 10:20:03 make -- common/autotest_common.sh@1127 -- $ make -j10 00:02:03.437 make[1]: Nothing to be done for 'all'. 00:02:15.640 The Meson build system 00:02:15.640 Version: 1.5.0 00:02:15.640 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:15.640 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:15.640 Build type: native build 00:02:15.640 Program cat found: YES (/usr/bin/cat) 00:02:15.640 Project name: DPDK 00:02:15.640 Project version: 24.03.0 00:02:15.640 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:15.640 C linker for the host machine: cc ld.bfd 2.40-14 00:02:15.640 Host machine cpu family: x86_64 00:02:15.640 Host machine cpu: x86_64 00:02:15.640 Message: ## Building in Developer Mode ## 00:02:15.640 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:15.640 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:15.640 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:15.640 Program python3 found: YES (/usr/bin/python3) 00:02:15.640 Program cat found: YES (/usr/bin/cat) 00:02:15.640 Compiler for C supports arguments -march=native: YES 00:02:15.640 Checking for size of "void *" : 8 00:02:15.640 Checking for size of "void *" : 8 (cached) 00:02:15.640 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:15.640 Library m found: YES 00:02:15.640 Library numa found: YES 00:02:15.640 Has header "numaif.h" : YES 00:02:15.640 Library fdt found: NO 00:02:15.640 Library execinfo found: NO 00:02:15.640 Has header "execinfo.h" : YES 00:02:15.641 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:15.641 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:15.641 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:15.641 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:15.641 Run-time dependency openssl found: YES 3.1.1 00:02:15.641 Run-time dependency libpcap found: YES 1.10.4 00:02:15.641 Has header "pcap.h" with dependency libpcap: YES 00:02:15.641 Compiler for C supports arguments -Wcast-qual: YES 00:02:15.641 Compiler for C supports arguments -Wdeprecated: YES 00:02:15.641 Compiler for C supports arguments -Wformat: YES 00:02:15.641 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:15.641 Compiler for C supports arguments -Wformat-security: NO 00:02:15.641 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:15.641 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:15.641 Compiler for C supports arguments -Wnested-externs: YES 00:02:15.641 Compiler for C supports arguments -Wold-style-definition: YES 00:02:15.641 Compiler for C supports arguments -Wpointer-arith: YES 00:02:15.641 Compiler for C supports arguments -Wsign-compare: YES 00:02:15.641 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:15.641 Compiler for C supports arguments -Wundef: YES 00:02:15.641 Compiler for C supports arguments -Wwrite-strings: YES 00:02:15.641 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:02:15.641 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:15.641 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:15.641 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:15.641 Program objdump found: YES (/usr/bin/objdump) 00:02:15.641 Compiler for C supports arguments -mavx512f: YES 00:02:15.641 Checking if "AVX512 checking" compiles: YES 00:02:15.641 Fetching value of define "__SSE4_2__" : 1 00:02:15.641 Fetching value of define "__AES__" : 1 00:02:15.641 Fetching value of define "__AVX__" : 1 00:02:15.641 Fetching value of define "__AVX2__" : 1 00:02:15.641 Fetching value of define "__AVX512BW__" : (undefined) 00:02:15.641 Fetching value of define "__AVX512CD__" : (undefined) 00:02:15.641 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:15.641 Fetching value of define "__AVX512F__" : (undefined) 00:02:15.641 Fetching value of define "__AVX512VL__" : (undefined) 00:02:15.641 Fetching value of define "__PCLMUL__" : 1 00:02:15.641 Fetching value of define "__RDRND__" : 1 00:02:15.641 Fetching value of define "__RDSEED__" : 1 00:02:15.641 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:15.641 Fetching value of define "__znver1__" : (undefined) 00:02:15.641 Fetching value of define "__znver2__" : (undefined) 00:02:15.641 Fetching value of define "__znver3__" : (undefined) 00:02:15.641 Fetching value of define "__znver4__" : (undefined) 00:02:15.641 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:15.641 Message: lib/log: Defining dependency "log" 00:02:15.641 Message: lib/kvargs: Defining dependency "kvargs" 00:02:15.641 Message: lib/telemetry: Defining dependency "telemetry" 00:02:15.641 Checking for function "getentropy" : NO 00:02:15.641 Message: lib/eal: Defining dependency "eal" 00:02:15.641 Message: lib/ring: Defining dependency "ring" 00:02:15.641 Message: lib/rcu: Defining dependency "rcu" 00:02:15.641 Message: lib/mempool: Defining dependency "mempool" 00:02:15.641 Message: lib/mbuf: Defining dependency "mbuf" 00:02:15.641 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:15.641 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:15.641 Compiler for C supports arguments -mpclmul: YES 00:02:15.641 Compiler for C supports arguments -maes: YES 00:02:15.641 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:15.641 Compiler for C supports arguments -mavx512bw: YES 00:02:15.641 Compiler for C supports arguments -mavx512dq: YES 00:02:15.641 Compiler for C supports arguments -mavx512vl: YES 00:02:15.641 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:15.641 Compiler for C supports arguments -mavx2: YES 00:02:15.641 Compiler for C supports arguments -mavx: YES 00:02:15.641 Message: lib/net: Defining dependency "net" 00:02:15.641 Message: lib/meter: Defining dependency "meter" 00:02:15.641 Message: lib/ethdev: Defining dependency "ethdev" 00:02:15.641 Message: lib/pci: Defining dependency "pci" 00:02:15.641 Message: lib/cmdline: Defining dependency "cmdline" 00:02:15.641 Message: lib/hash: Defining dependency "hash" 00:02:15.641 Message: lib/timer: Defining dependency "timer" 00:02:15.641 Message: lib/compressdev: Defining dependency "compressdev" 00:02:15.641 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:15.641 Message: lib/dmadev: Defining dependency "dmadev" 00:02:15.641 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:15.641 Message: lib/power: Defining 
dependency "power" 00:02:15.641 Message: lib/reorder: Defining dependency "reorder" 00:02:15.641 Message: lib/security: Defining dependency "security" 00:02:15.641 Has header "linux/userfaultfd.h" : YES 00:02:15.641 Has header "linux/vduse.h" : YES 00:02:15.641 Message: lib/vhost: Defining dependency "vhost" 00:02:15.641 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:15.641 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:15.641 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:15.641 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:15.641 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:15.641 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:15.641 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:15.641 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:15.641 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:15.641 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:15.641 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:15.641 Configuring doxy-api-html.conf using configuration 00:02:15.641 Configuring doxy-api-man.conf using configuration 00:02:15.641 Program mandb found: YES (/usr/bin/mandb) 00:02:15.641 Program sphinx-build found: NO 00:02:15.641 Configuring rte_build_config.h using configuration 00:02:15.641 Message: 00:02:15.641 ================= 00:02:15.641 Applications Enabled 00:02:15.641 ================= 00:02:15.641 00:02:15.641 apps: 00:02:15.641 00:02:15.641 00:02:15.641 Message: 00:02:15.641 ================= 00:02:15.641 Libraries Enabled 00:02:15.641 ================= 00:02:15.641 00:02:15.641 libs: 00:02:15.641 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:15.641 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:15.641 cryptodev, dmadev, power, reorder, security, vhost, 00:02:15.641 00:02:15.641 Message: 00:02:15.641 =============== 00:02:15.641 Drivers Enabled 00:02:15.641 =============== 00:02:15.641 00:02:15.641 common: 00:02:15.641 00:02:15.641 bus: 00:02:15.641 pci, vdev, 00:02:15.641 mempool: 00:02:15.641 ring, 00:02:15.641 dma: 00:02:15.641 00:02:15.641 net: 00:02:15.641 00:02:15.641 crypto: 00:02:15.641 00:02:15.641 compress: 00:02:15.641 00:02:15.641 vdpa: 00:02:15.641 00:02:15.641 00:02:15.641 Message: 00:02:15.641 ================= 00:02:15.641 Content Skipped 00:02:15.641 ================= 00:02:15.641 00:02:15.641 apps: 00:02:15.641 dumpcap: explicitly disabled via build config 00:02:15.641 graph: explicitly disabled via build config 00:02:15.641 pdump: explicitly disabled via build config 00:02:15.641 proc-info: explicitly disabled via build config 00:02:15.641 test-acl: explicitly disabled via build config 00:02:15.641 test-bbdev: explicitly disabled via build config 00:02:15.641 test-cmdline: explicitly disabled via build config 00:02:15.641 test-compress-perf: explicitly disabled via build config 00:02:15.641 test-crypto-perf: explicitly disabled via build config 00:02:15.641 test-dma-perf: explicitly disabled via build config 00:02:15.641 test-eventdev: explicitly disabled via build config 00:02:15.641 test-fib: explicitly disabled via build config 00:02:15.641 test-flow-perf: explicitly disabled via build config 00:02:15.641 test-gpudev: explicitly disabled via build config 00:02:15.641 test-mldev: explicitly disabled via build config 00:02:15.641 test-pipeline: 
explicitly disabled via build config 00:02:15.641 test-pmd: explicitly disabled via build config 00:02:15.641 test-regex: explicitly disabled via build config 00:02:15.641 test-sad: explicitly disabled via build config 00:02:15.641 test-security-perf: explicitly disabled via build config 00:02:15.641 00:02:15.641 libs: 00:02:15.641 argparse: explicitly disabled via build config 00:02:15.641 metrics: explicitly disabled via build config 00:02:15.641 acl: explicitly disabled via build config 00:02:15.641 bbdev: explicitly disabled via build config 00:02:15.641 bitratestats: explicitly disabled via build config 00:02:15.641 bpf: explicitly disabled via build config 00:02:15.641 cfgfile: explicitly disabled via build config 00:02:15.641 distributor: explicitly disabled via build config 00:02:15.641 efd: explicitly disabled via build config 00:02:15.641 eventdev: explicitly disabled via build config 00:02:15.641 dispatcher: explicitly disabled via build config 00:02:15.641 gpudev: explicitly disabled via build config 00:02:15.641 gro: explicitly disabled via build config 00:02:15.641 gso: explicitly disabled via build config 00:02:15.641 ip_frag: explicitly disabled via build config 00:02:15.641 jobstats: explicitly disabled via build config 00:02:15.642 latencystats: explicitly disabled via build config 00:02:15.642 lpm: explicitly disabled via build config 00:02:15.642 member: explicitly disabled via build config 00:02:15.642 pcapng: explicitly disabled via build config 00:02:15.642 rawdev: explicitly disabled via build config 00:02:15.642 regexdev: explicitly disabled via build config 00:02:15.642 mldev: explicitly disabled via build config 00:02:15.642 rib: explicitly disabled via build config 00:02:15.642 sched: explicitly disabled via build config 00:02:15.642 stack: explicitly disabled via build config 00:02:15.642 ipsec: explicitly disabled via build config 00:02:15.642 pdcp: explicitly disabled via build config 00:02:15.642 fib: explicitly disabled via build config 00:02:15.642 port: explicitly disabled via build config 00:02:15.642 pdump: explicitly disabled via build config 00:02:15.642 table: explicitly disabled via build config 00:02:15.642 pipeline: explicitly disabled via build config 00:02:15.642 graph: explicitly disabled via build config 00:02:15.642 node: explicitly disabled via build config 00:02:15.642 00:02:15.642 drivers: 00:02:15.642 common/cpt: not in enabled drivers build config 00:02:15.642 common/dpaax: not in enabled drivers build config 00:02:15.642 common/iavf: not in enabled drivers build config 00:02:15.642 common/idpf: not in enabled drivers build config 00:02:15.642 common/ionic: not in enabled drivers build config 00:02:15.642 common/mvep: not in enabled drivers build config 00:02:15.642 common/octeontx: not in enabled drivers build config 00:02:15.642 bus/auxiliary: not in enabled drivers build config 00:02:15.642 bus/cdx: not in enabled drivers build config 00:02:15.642 bus/dpaa: not in enabled drivers build config 00:02:15.642 bus/fslmc: not in enabled drivers build config 00:02:15.642 bus/ifpga: not in enabled drivers build config 00:02:15.642 bus/platform: not in enabled drivers build config 00:02:15.642 bus/uacce: not in enabled drivers build config 00:02:15.642 bus/vmbus: not in enabled drivers build config 00:02:15.642 common/cnxk: not in enabled drivers build config 00:02:15.642 common/mlx5: not in enabled drivers build config 00:02:15.642 common/nfp: not in enabled drivers build config 00:02:15.642 common/nitrox: not in enabled drivers build config 
00:02:15.642 common/qat: not in enabled drivers build config 00:02:15.642 common/sfc_efx: not in enabled drivers build config 00:02:15.642 mempool/bucket: not in enabled drivers build config 00:02:15.642 mempool/cnxk: not in enabled drivers build config 00:02:15.642 mempool/dpaa: not in enabled drivers build config 00:02:15.642 mempool/dpaa2: not in enabled drivers build config 00:02:15.642 mempool/octeontx: not in enabled drivers build config 00:02:15.642 mempool/stack: not in enabled drivers build config 00:02:15.642 dma/cnxk: not in enabled drivers build config 00:02:15.642 dma/dpaa: not in enabled drivers build config 00:02:15.642 dma/dpaa2: not in enabled drivers build config 00:02:15.642 dma/hisilicon: not in enabled drivers build config 00:02:15.642 dma/idxd: not in enabled drivers build config 00:02:15.642 dma/ioat: not in enabled drivers build config 00:02:15.642 dma/skeleton: not in enabled drivers build config 00:02:15.642 net/af_packet: not in enabled drivers build config 00:02:15.642 net/af_xdp: not in enabled drivers build config 00:02:15.642 net/ark: not in enabled drivers build config 00:02:15.642 net/atlantic: not in enabled drivers build config 00:02:15.642 net/avp: not in enabled drivers build config 00:02:15.642 net/axgbe: not in enabled drivers build config 00:02:15.642 net/bnx2x: not in enabled drivers build config 00:02:15.642 net/bnxt: not in enabled drivers build config 00:02:15.642 net/bonding: not in enabled drivers build config 00:02:15.642 net/cnxk: not in enabled drivers build config 00:02:15.642 net/cpfl: not in enabled drivers build config 00:02:15.642 net/cxgbe: not in enabled drivers build config 00:02:15.642 net/dpaa: not in enabled drivers build config 00:02:15.642 net/dpaa2: not in enabled drivers build config 00:02:15.642 net/e1000: not in enabled drivers build config 00:02:15.642 net/ena: not in enabled drivers build config 00:02:15.642 net/enetc: not in enabled drivers build config 00:02:15.642 net/enetfec: not in enabled drivers build config 00:02:15.642 net/enic: not in enabled drivers build config 00:02:15.642 net/failsafe: not in enabled drivers build config 00:02:15.642 net/fm10k: not in enabled drivers build config 00:02:15.642 net/gve: not in enabled drivers build config 00:02:15.642 net/hinic: not in enabled drivers build config 00:02:15.642 net/hns3: not in enabled drivers build config 00:02:15.642 net/i40e: not in enabled drivers build config 00:02:15.642 net/iavf: not in enabled drivers build config 00:02:15.642 net/ice: not in enabled drivers build config 00:02:15.642 net/idpf: not in enabled drivers build config 00:02:15.642 net/igc: not in enabled drivers build config 00:02:15.642 net/ionic: not in enabled drivers build config 00:02:15.642 net/ipn3ke: not in enabled drivers build config 00:02:15.642 net/ixgbe: not in enabled drivers build config 00:02:15.642 net/mana: not in enabled drivers build config 00:02:15.642 net/memif: not in enabled drivers build config 00:02:15.642 net/mlx4: not in enabled drivers build config 00:02:15.642 net/mlx5: not in enabled drivers build config 00:02:15.642 net/mvneta: not in enabled drivers build config 00:02:15.642 net/mvpp2: not in enabled drivers build config 00:02:15.642 net/netvsc: not in enabled drivers build config 00:02:15.642 net/nfb: not in enabled drivers build config 00:02:15.642 net/nfp: not in enabled drivers build config 00:02:15.642 net/ngbe: not in enabled drivers build config 00:02:15.642 net/null: not in enabled drivers build config 00:02:15.642 net/octeontx: not in enabled drivers 
build config 00:02:15.642 net/octeon_ep: not in enabled drivers build config 00:02:15.642 net/pcap: not in enabled drivers build config 00:02:15.642 net/pfe: not in enabled drivers build config 00:02:15.642 net/qede: not in enabled drivers build config 00:02:15.642 net/ring: not in enabled drivers build config 00:02:15.642 net/sfc: not in enabled drivers build config 00:02:15.642 net/softnic: not in enabled drivers build config 00:02:15.642 net/tap: not in enabled drivers build config 00:02:15.642 net/thunderx: not in enabled drivers build config 00:02:15.642 net/txgbe: not in enabled drivers build config 00:02:15.642 net/vdev_netvsc: not in enabled drivers build config 00:02:15.642 net/vhost: not in enabled drivers build config 00:02:15.642 net/virtio: not in enabled drivers build config 00:02:15.642 net/vmxnet3: not in enabled drivers build config 00:02:15.642 raw/*: missing internal dependency, "rawdev" 00:02:15.642 crypto/armv8: not in enabled drivers build config 00:02:15.642 crypto/bcmfs: not in enabled drivers build config 00:02:15.642 crypto/caam_jr: not in enabled drivers build config 00:02:15.642 crypto/ccp: not in enabled drivers build config 00:02:15.642 crypto/cnxk: not in enabled drivers build config 00:02:15.642 crypto/dpaa_sec: not in enabled drivers build config 00:02:15.642 crypto/dpaa2_sec: not in enabled drivers build config 00:02:15.642 crypto/ipsec_mb: not in enabled drivers build config 00:02:15.642 crypto/mlx5: not in enabled drivers build config 00:02:15.642 crypto/mvsam: not in enabled drivers build config 00:02:15.642 crypto/nitrox: not in enabled drivers build config 00:02:15.642 crypto/null: not in enabled drivers build config 00:02:15.642 crypto/octeontx: not in enabled drivers build config 00:02:15.642 crypto/openssl: not in enabled drivers build config 00:02:15.642 crypto/scheduler: not in enabled drivers build config 00:02:15.642 crypto/uadk: not in enabled drivers build config 00:02:15.642 crypto/virtio: not in enabled drivers build config 00:02:15.642 compress/isal: not in enabled drivers build config 00:02:15.642 compress/mlx5: not in enabled drivers build config 00:02:15.642 compress/nitrox: not in enabled drivers build config 00:02:15.642 compress/octeontx: not in enabled drivers build config 00:02:15.642 compress/zlib: not in enabled drivers build config 00:02:15.642 regex/*: missing internal dependency, "regexdev" 00:02:15.642 ml/*: missing internal dependency, "mldev" 00:02:15.642 vdpa/ifc: not in enabled drivers build config 00:02:15.642 vdpa/mlx5: not in enabled drivers build config 00:02:15.642 vdpa/nfp: not in enabled drivers build config 00:02:15.642 vdpa/sfc: not in enabled drivers build config 00:02:15.642 event/*: missing internal dependency, "eventdev" 00:02:15.642 baseband/*: missing internal dependency, "bbdev" 00:02:15.642 gpu/*: missing internal dependency, "gpudev" 00:02:15.642 00:02:15.642 00:02:15.642 Build targets in project: 85 00:02:15.642 00:02:15.642 DPDK 24.03.0 00:02:15.642 00:02:15.642 User defined options 00:02:15.642 buildtype : debug 00:02:15.642 default_library : shared 00:02:15.642 libdir : lib 00:02:15.642 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:15.642 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:15.642 c_link_args : 00:02:15.642 cpu_instruction_set: native 00:02:15.642 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:15.642 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:15.642 enable_docs : false 00:02:15.642 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:15.642 enable_kmods : false 00:02:15.642 max_lcores : 128 00:02:15.642 tests : false 00:02:15.642 00:02:15.642 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:15.643 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:15.643 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:15.643 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:15.643 [3/268] Linking static target lib/librte_kvargs.a 00:02:15.643 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:15.643 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:15.643 [6/268] Linking static target lib/librte_log.a 00:02:15.901 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.901 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:15.901 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:16.159 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:16.159 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:16.159 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:16.159 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:16.159 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:16.159 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:16.418 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:16.418 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:16.418 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.418 [19/268] Linking static target lib/librte_telemetry.a 00:02:16.418 [20/268] Linking target lib/librte_log.so.24.1 00:02:16.676 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:16.676 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:16.933 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:17.191 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:17.191 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:17.191 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:17.191 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:17.191 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:17.191 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:17.191 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:17.191 [31/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:17.191 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:17.191 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:17.191 [34/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.449 [35/268] Linking target lib/librte_telemetry.so.24.1 00:02:17.707 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:17.707 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:17.966 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:17.966 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:17.966 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:17.966 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:17.966 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:17.966 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:18.225 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:18.225 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:18.225 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:18.225 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:18.225 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:18.225 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:18.483 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:18.483 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:19.047 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:19.047 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:19.047 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:19.047 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:19.304 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:19.304 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:19.304 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:19.304 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:19.304 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:19.304 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:19.304 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:19.869 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:19.869 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:19.869 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:19.869 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:20.127 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:20.127 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:20.385 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:20.385 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:20.385 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:20.385 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:20.385 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:20.385 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:20.644 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:20.644 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:20.902 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:20.902 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:20.902 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:20.902 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:20.902 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:21.469 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:21.469 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:21.469 [84/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:21.469 [85/268] Linking static target lib/librte_rcu.a 00:02:21.469 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:21.469 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:21.469 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:21.469 [89/268] Linking static target lib/librte_eal.a 00:02:21.469 [90/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:21.469 [91/268] Linking static target lib/librte_ring.a 00:02:21.727 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:21.727 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:21.989 [94/268] Linking static target lib/librte_mempool.a 00:02:21.989 [95/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:21.989 [96/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:21.989 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:21.989 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.989 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:21.989 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:21.989 [101/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.255 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:22.255 [103/268] Linking static target lib/librte_mbuf.a 00:02:22.514 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:22.514 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:22.514 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:22.773 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:22.773 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:22.773 [109/268] Linking static target lib/librte_meter.a 00:02:22.774 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:22.774 [111/268] Linking static target lib/librte_net.a 00:02:23.032 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:23.032 [113/268] Generating 
lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.291 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.291 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:23.291 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:23.291 [117/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.291 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.549 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:23.809 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:23.809 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:24.067 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:24.325 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:24.325 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:24.325 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:24.325 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:24.325 [127/268] Linking static target lib/librte_pci.a 00:02:24.325 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:24.325 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:24.584 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:24.584 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:24.584 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:24.584 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:24.584 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:24.584 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:24.843 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:24.843 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.843 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:24.843 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:24.843 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:24.843 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:24.843 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:24.843 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:25.102 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:25.102 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:25.102 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:25.102 [147/268] Linking static target lib/librte_cmdline.a 00:02:25.102 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:25.670 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:25.670 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:25.670 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:25.670 [152/268] Linking static target 
lib/librte_timer.a 00:02:25.670 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:25.670 [154/268] Linking static target lib/librte_ethdev.a 00:02:25.670 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:25.930 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:25.930 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:25.930 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:25.930 [159/268] Linking static target lib/librte_hash.a 00:02:26.189 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:26.189 [161/268] Linking static target lib/librte_compressdev.a 00:02:26.189 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.448 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:26.448 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:26.448 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:26.708 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:26.708 [167/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.708 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:26.708 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:26.708 [170/268] Linking static target lib/librte_dmadev.a 00:02:26.967 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:26.967 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:26.967 [173/268] Linking static target lib/librte_cryptodev.a 00:02:27.226 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:27.226 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.226 [176/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:27.226 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.485 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:27.485 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:27.744 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.744 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:27.744 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:27.744 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:27.744 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:28.004 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:28.004 [186/268] Linking static target lib/librte_power.a 00:02:28.263 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:28.263 [188/268] Linking static target lib/librte_reorder.a 00:02:28.521 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:28.521 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:28.521 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:28.521 [192/268] Linking static target 
lib/librte_security.a 00:02:28.521 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:28.779 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.038 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:29.297 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.297 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.297 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:29.555 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:29.555 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:29.555 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.555 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:29.814 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:29.814 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:30.072 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:30.072 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:30.331 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:30.331 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:30.331 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:30.331 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:30.331 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:30.591 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:30.591 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:30.591 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:30.591 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:30.591 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:30.591 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:30.591 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:30.591 [219/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:30.591 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:30.591 [221/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:30.591 [222/268] Linking static target drivers/librte_bus_pci.a 00:02:30.895 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:30.895 [224/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.896 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:30.896 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:30.896 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:31.155 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.722 [229/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:31.722 [230/268] Linking static target lib/librte_vhost.a 00:02:32.657 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.657 [232/268] Linking target lib/librte_eal.so.24.1 00:02:32.657 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:32.657 [234/268] Linking target lib/librte_ring.so.24.1 00:02:32.657 [235/268] Linking target lib/librte_timer.so.24.1 00:02:32.657 [236/268] Linking target lib/librte_meter.so.24.1 00:02:32.657 [237/268] Linking target lib/librte_pci.so.24.1 00:02:32.657 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:32.657 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:32.916 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:32.916 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:32.916 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:32.916 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:32.916 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:32.916 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:32.916 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:32.916 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:32.916 [248/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.916 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:33.174 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:33.174 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:33.174 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:33.174 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:33.174 [254/268] Linking target lib/librte_reorder.so.24.1 00:02:33.432 [255/268] Linking target lib/librte_net.so.24.1 00:02:33.432 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:33.432 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:02:33.432 [258/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.432 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:33.432 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:33.432 [261/268] Linking target lib/librte_hash.so.24.1 00:02:33.432 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:33.432 [263/268] Linking target lib/librte_security.so.24.1 00:02:33.432 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:33.691 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:33.691 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:33.691 [267/268] Linking target lib/librte_power.so.24.1 00:02:33.691 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:33.691 INFO: autodetecting backend as ninja 00:02:33.691 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:00.233 CC lib/log/log.o 00:03:00.233 CC lib/log/log_flags.o 00:03:00.233 CC lib/ut/ut.o 00:03:00.233 CC lib/log/log_deprecated.o 00:03:00.233 CC lib/ut_mock/mock.o 00:03:00.233 LIB 
libspdk_log.a 00:03:00.233 LIB libspdk_ut.a 00:03:00.233 LIB libspdk_ut_mock.a 00:03:00.233 SO libspdk_ut.so.2.0 00:03:00.233 SO libspdk_log.so.7.1 00:03:00.233 SO libspdk_ut_mock.so.6.0 00:03:00.233 SYMLINK libspdk_ut.so 00:03:00.233 SYMLINK libspdk_ut_mock.so 00:03:00.233 SYMLINK libspdk_log.so 00:03:00.233 CC lib/dma/dma.o 00:03:00.233 CC lib/util/bit_array.o 00:03:00.233 CC lib/util/base64.o 00:03:00.233 CC lib/util/cpuset.o 00:03:00.233 CC lib/util/crc16.o 00:03:00.233 CC lib/util/crc32.o 00:03:00.233 CC lib/util/crc32c.o 00:03:00.233 CXX lib/trace_parser/trace.o 00:03:00.233 CC lib/ioat/ioat.o 00:03:00.233 CC lib/vfio_user/host/vfio_user_pci.o 00:03:00.233 CC lib/vfio_user/host/vfio_user.o 00:03:00.233 CC lib/util/crc32_ieee.o 00:03:00.233 CC lib/util/crc64.o 00:03:00.233 CC lib/util/dif.o 00:03:00.233 CC lib/util/fd.o 00:03:00.233 LIB libspdk_dma.a 00:03:00.233 CC lib/util/fd_group.o 00:03:00.233 SO libspdk_dma.so.5.0 00:03:00.233 LIB libspdk_ioat.a 00:03:00.233 CC lib/util/file.o 00:03:00.233 SYMLINK libspdk_dma.so 00:03:00.233 CC lib/util/hexlify.o 00:03:00.233 CC lib/util/iov.o 00:03:00.233 SO libspdk_ioat.so.7.0 00:03:00.233 CC lib/util/math.o 00:03:00.233 CC lib/util/net.o 00:03:00.233 LIB libspdk_vfio_user.a 00:03:00.233 SYMLINK libspdk_ioat.so 00:03:00.233 CC lib/util/pipe.o 00:03:00.233 SO libspdk_vfio_user.so.5.0 00:03:00.233 CC lib/util/strerror_tls.o 00:03:00.233 CC lib/util/string.o 00:03:00.233 SYMLINK libspdk_vfio_user.so 00:03:00.233 CC lib/util/uuid.o 00:03:00.233 CC lib/util/xor.o 00:03:00.233 CC lib/util/zipf.o 00:03:00.233 CC lib/util/md5.o 00:03:00.233 LIB libspdk_util.a 00:03:00.233 SO libspdk_util.so.10.1 00:03:00.233 LIB libspdk_trace_parser.a 00:03:00.233 SYMLINK libspdk_util.so 00:03:00.233 SO libspdk_trace_parser.so.6.0 00:03:00.233 SYMLINK libspdk_trace_parser.so 00:03:00.233 CC lib/json/json_parse.o 00:03:00.233 CC lib/idxd/idxd.o 00:03:00.233 CC lib/idxd/idxd_user.o 00:03:00.233 CC lib/json/json_util.o 00:03:00.233 CC lib/idxd/idxd_kernel.o 00:03:00.233 CC lib/rdma_utils/rdma_utils.o 00:03:00.233 CC lib/json/json_write.o 00:03:00.233 CC lib/vmd/vmd.o 00:03:00.233 CC lib/env_dpdk/env.o 00:03:00.233 CC lib/conf/conf.o 00:03:00.233 CC lib/env_dpdk/memory.o 00:03:00.233 CC lib/vmd/led.o 00:03:00.233 CC lib/env_dpdk/pci.o 00:03:00.233 CC lib/env_dpdk/init.o 00:03:00.233 LIB libspdk_conf.a 00:03:00.233 SO libspdk_conf.so.6.0 00:03:00.233 LIB libspdk_rdma_utils.a 00:03:00.233 LIB libspdk_json.a 00:03:00.233 SO libspdk_rdma_utils.so.1.0 00:03:00.233 SYMLINK libspdk_conf.so 00:03:00.233 CC lib/env_dpdk/threads.o 00:03:00.233 SO libspdk_json.so.6.0 00:03:00.233 SYMLINK libspdk_rdma_utils.so 00:03:00.233 CC lib/env_dpdk/pci_ioat.o 00:03:00.233 CC lib/env_dpdk/pci_virtio.o 00:03:00.233 SYMLINK libspdk_json.so 00:03:00.233 CC lib/env_dpdk/pci_vmd.o 00:03:00.233 CC lib/env_dpdk/pci_idxd.o 00:03:00.233 CC lib/env_dpdk/pci_event.o 00:03:00.233 CC lib/env_dpdk/sigbus_handler.o 00:03:00.233 CC lib/env_dpdk/pci_dpdk.o 00:03:00.233 LIB libspdk_idxd.a 00:03:00.233 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:00.233 SO libspdk_idxd.so.12.1 00:03:00.233 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:00.233 LIB libspdk_vmd.a 00:03:00.233 SO libspdk_vmd.so.6.0 00:03:00.233 SYMLINK libspdk_idxd.so 00:03:00.233 SYMLINK libspdk_vmd.so 00:03:00.233 CC lib/rdma_provider/common.o 00:03:00.233 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:00.233 CC lib/jsonrpc/jsonrpc_server.o 00:03:00.233 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:00.233 CC lib/jsonrpc/jsonrpc_client.o 00:03:00.233 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:03:00.233 LIB libspdk_rdma_provider.a 00:03:00.233 SO libspdk_rdma_provider.so.7.0 00:03:00.233 SYMLINK libspdk_rdma_provider.so 00:03:00.233 LIB libspdk_jsonrpc.a 00:03:00.233 SO libspdk_jsonrpc.so.6.0 00:03:00.233 SYMLINK libspdk_jsonrpc.so 00:03:00.233 LIB libspdk_env_dpdk.a 00:03:00.233 SO libspdk_env_dpdk.so.15.1 00:03:00.233 CC lib/rpc/rpc.o 00:03:00.233 SYMLINK libspdk_env_dpdk.so 00:03:00.233 LIB libspdk_rpc.a 00:03:00.233 SO libspdk_rpc.so.6.0 00:03:00.491 SYMLINK libspdk_rpc.so 00:03:00.749 CC lib/trace/trace.o 00:03:00.749 CC lib/trace/trace_rpc.o 00:03:00.749 CC lib/trace/trace_flags.o 00:03:00.749 CC lib/notify/notify.o 00:03:00.749 CC lib/notify/notify_rpc.o 00:03:00.749 CC lib/keyring/keyring.o 00:03:00.749 CC lib/keyring/keyring_rpc.o 00:03:00.749 LIB libspdk_notify.a 00:03:01.008 SO libspdk_notify.so.6.0 00:03:01.008 LIB libspdk_keyring.a 00:03:01.008 SYMLINK libspdk_notify.so 00:03:01.008 LIB libspdk_trace.a 00:03:01.008 SO libspdk_keyring.so.2.0 00:03:01.008 SO libspdk_trace.so.11.0 00:03:01.008 SYMLINK libspdk_keyring.so 00:03:01.008 SYMLINK libspdk_trace.so 00:03:01.267 CC lib/thread/thread.o 00:03:01.267 CC lib/thread/iobuf.o 00:03:01.267 CC lib/sock/sock.o 00:03:01.267 CC lib/sock/sock_rpc.o 00:03:01.834 LIB libspdk_sock.a 00:03:01.834 SO libspdk_sock.so.10.0 00:03:02.093 SYMLINK libspdk_sock.so 00:03:02.352 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:02.352 CC lib/nvme/nvme_ctrlr.o 00:03:02.352 CC lib/nvme/nvme_ns_cmd.o 00:03:02.352 CC lib/nvme/nvme_fabric.o 00:03:02.352 CC lib/nvme/nvme_ns.o 00:03:02.352 CC lib/nvme/nvme_qpair.o 00:03:02.352 CC lib/nvme/nvme_pcie_common.o 00:03:02.352 CC lib/nvme/nvme_pcie.o 00:03:02.352 CC lib/nvme/nvme.o 00:03:03.287 CC lib/nvme/nvme_quirks.o 00:03:03.287 LIB libspdk_thread.a 00:03:03.287 CC lib/nvme/nvme_transport.o 00:03:03.287 SO libspdk_thread.so.11.0 00:03:03.287 CC lib/nvme/nvme_discovery.o 00:03:03.287 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:03.287 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:03.287 SYMLINK libspdk_thread.so 00:03:03.287 CC lib/nvme/nvme_tcp.o 00:03:03.287 CC lib/nvme/nvme_opal.o 00:03:03.287 CC lib/nvme/nvme_io_msg.o 00:03:03.546 CC lib/nvme/nvme_poll_group.o 00:03:03.804 CC lib/nvme/nvme_zns.o 00:03:03.804 CC lib/nvme/nvme_stubs.o 00:03:03.804 CC lib/nvme/nvme_auth.o 00:03:03.804 CC lib/nvme/nvme_cuse.o 00:03:03.804 CC lib/nvme/nvme_rdma.o 00:03:04.062 CC lib/accel/accel.o 00:03:04.062 CC lib/accel/accel_rpc.o 00:03:04.062 CC lib/blob/blobstore.o 00:03:04.319 CC lib/blob/request.o 00:03:04.319 CC lib/accel/accel_sw.o 00:03:04.577 CC lib/blob/zeroes.o 00:03:04.577 CC lib/init/json_config.o 00:03:04.577 CC lib/init/subsystem.o 00:03:04.836 CC lib/init/subsystem_rpc.o 00:03:04.836 CC lib/blob/blob_bs_dev.o 00:03:04.836 CC lib/init/rpc.o 00:03:04.836 CC lib/virtio/virtio.o 00:03:04.836 CC lib/virtio/virtio_vhost_user.o 00:03:04.836 CC lib/virtio/virtio_vfio_user.o 00:03:04.836 CC lib/virtio/virtio_pci.o 00:03:05.095 CC lib/fsdev/fsdev.o 00:03:05.095 CC lib/fsdev/fsdev_io.o 00:03:05.095 LIB libspdk_init.a 00:03:05.095 SO libspdk_init.so.6.0 00:03:05.095 SYMLINK libspdk_init.so 00:03:05.095 CC lib/fsdev/fsdev_rpc.o 00:03:05.095 LIB libspdk_accel.a 00:03:05.435 SO libspdk_accel.so.16.0 00:03:05.435 LIB libspdk_nvme.a 00:03:05.435 LIB libspdk_virtio.a 00:03:05.435 SO libspdk_virtio.so.7.0 00:03:05.435 SYMLINK libspdk_accel.so 00:03:05.435 CC lib/event/app.o 00:03:05.435 CC lib/event/reactor.o 00:03:05.435 CC lib/event/scheduler_static.o 00:03:05.435 CC lib/event/app_rpc.o 00:03:05.435 CC 
lib/event/log_rpc.o 00:03:05.435 SYMLINK libspdk_virtio.so 00:03:05.435 SO libspdk_nvme.so.15.0 00:03:05.435 CC lib/bdev/bdev.o 00:03:05.435 CC lib/bdev/bdev_rpc.o 00:03:05.435 CC lib/bdev/bdev_zone.o 00:03:05.435 CC lib/bdev/part.o 00:03:05.700 LIB libspdk_fsdev.a 00:03:05.701 CC lib/bdev/scsi_nvme.o 00:03:05.701 SO libspdk_fsdev.so.2.0 00:03:05.701 SYMLINK libspdk_nvme.so 00:03:05.701 SYMLINK libspdk_fsdev.so 00:03:05.959 LIB libspdk_event.a 00:03:05.959 SO libspdk_event.so.14.0 00:03:05.959 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:05.959 SYMLINK libspdk_event.so 00:03:06.527 LIB libspdk_fuse_dispatcher.a 00:03:06.527 SO libspdk_fuse_dispatcher.so.1.0 00:03:06.527 SYMLINK libspdk_fuse_dispatcher.so 00:03:07.094 LIB libspdk_blob.a 00:03:07.094 SO libspdk_blob.so.11.0 00:03:07.353 SYMLINK libspdk_blob.so 00:03:07.612 CC lib/blobfs/blobfs.o 00:03:07.612 CC lib/blobfs/tree.o 00:03:07.612 CC lib/lvol/lvol.o 00:03:08.179 LIB libspdk_bdev.a 00:03:08.437 SO libspdk_bdev.so.17.0 00:03:08.437 SYMLINK libspdk_bdev.so 00:03:08.437 LIB libspdk_blobfs.a 00:03:08.695 SO libspdk_blobfs.so.10.0 00:03:08.695 SYMLINK libspdk_blobfs.so 00:03:08.695 CC lib/nvmf/ctrlr.o 00:03:08.695 CC lib/nvmf/ctrlr_discovery.o 00:03:08.695 CC lib/nvmf/ctrlr_bdev.o 00:03:08.695 CC lib/nvmf/nvmf.o 00:03:08.695 CC lib/nvmf/subsystem.o 00:03:08.695 CC lib/nbd/nbd.o 00:03:08.695 CC lib/ftl/ftl_core.o 00:03:08.695 CC lib/scsi/dev.o 00:03:08.695 CC lib/ublk/ublk.o 00:03:08.695 LIB libspdk_lvol.a 00:03:08.695 SO libspdk_lvol.so.10.0 00:03:08.695 SYMLINK libspdk_lvol.so 00:03:08.695 CC lib/scsi/lun.o 00:03:08.954 CC lib/nbd/nbd_rpc.o 00:03:09.213 CC lib/nvmf/nvmf_rpc.o 00:03:09.213 CC lib/ftl/ftl_init.o 00:03:09.213 LIB libspdk_nbd.a 00:03:09.213 CC lib/scsi/port.o 00:03:09.213 CC lib/ftl/ftl_layout.o 00:03:09.213 SO libspdk_nbd.so.7.0 00:03:09.213 SYMLINK libspdk_nbd.so 00:03:09.213 CC lib/scsi/scsi.o 00:03:09.213 CC lib/ftl/ftl_debug.o 00:03:09.472 CC lib/ftl/ftl_io.o 00:03:09.472 CC lib/ublk/ublk_rpc.o 00:03:09.472 CC lib/scsi/scsi_bdev.o 00:03:09.472 CC lib/nvmf/transport.o 00:03:09.472 CC lib/nvmf/tcp.o 00:03:09.472 LIB libspdk_ublk.a 00:03:09.472 CC lib/ftl/ftl_sb.o 00:03:09.472 SO libspdk_ublk.so.3.0 00:03:09.731 CC lib/ftl/ftl_l2p.o 00:03:09.731 CC lib/scsi/scsi_pr.o 00:03:09.731 SYMLINK libspdk_ublk.so 00:03:09.731 CC lib/scsi/scsi_rpc.o 00:03:09.731 CC lib/scsi/task.o 00:03:09.731 CC lib/nvmf/stubs.o 00:03:09.731 CC lib/ftl/ftl_l2p_flat.o 00:03:09.989 CC lib/nvmf/mdns_server.o 00:03:09.989 CC lib/nvmf/rdma.o 00:03:09.989 CC lib/ftl/ftl_nv_cache.o 00:03:09.989 LIB libspdk_scsi.a 00:03:09.989 CC lib/nvmf/auth.o 00:03:09.989 CC lib/ftl/ftl_band.o 00:03:09.989 SO libspdk_scsi.so.9.0 00:03:10.249 SYMLINK libspdk_scsi.so 00:03:10.249 CC lib/ftl/ftl_band_ops.o 00:03:10.249 CC lib/ftl/ftl_writer.o 00:03:10.249 CC lib/ftl/ftl_rq.o 00:03:10.249 CC lib/ftl/ftl_reloc.o 00:03:10.508 CC lib/ftl/ftl_l2p_cache.o 00:03:10.508 CC lib/ftl/ftl_p2l.o 00:03:10.508 CC lib/ftl/ftl_p2l_log.o 00:03:10.508 CC lib/ftl/mngt/ftl_mngt.o 00:03:10.767 CC lib/iscsi/conn.o 00:03:10.767 CC lib/iscsi/init_grp.o 00:03:10.767 CC lib/iscsi/iscsi.o 00:03:10.767 CC lib/iscsi/param.o 00:03:10.767 CC lib/iscsi/portal_grp.o 00:03:11.025 CC lib/iscsi/tgt_node.o 00:03:11.025 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:11.025 CC lib/iscsi/iscsi_subsystem.o 00:03:11.025 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:11.284 CC lib/iscsi/iscsi_rpc.o 00:03:11.284 CC lib/iscsi/task.o 00:03:11.284 CC lib/vhost/vhost.o 00:03:11.284 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:11.284 CC 
lib/ftl/mngt/ftl_mngt_md.o 00:03:11.284 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:11.284 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:11.284 CC lib/vhost/vhost_rpc.o 00:03:11.284 CC lib/vhost/vhost_scsi.o 00:03:11.543 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:11.543 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:11.543 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:11.543 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:11.543 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:11.543 CC lib/vhost/vhost_blk.o 00:03:11.802 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:11.802 CC lib/ftl/utils/ftl_conf.o 00:03:11.802 CC lib/ftl/utils/ftl_md.o 00:03:12.060 LIB libspdk_nvmf.a 00:03:12.060 CC lib/ftl/utils/ftl_mempool.o 00:03:12.060 CC lib/vhost/rte_vhost_user.o 00:03:12.060 CC lib/ftl/utils/ftl_bitmap.o 00:03:12.060 SO libspdk_nvmf.so.20.0 00:03:12.060 CC lib/ftl/utils/ftl_property.o 00:03:12.060 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:12.320 LIB libspdk_iscsi.a 00:03:12.320 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:12.320 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:12.320 SO libspdk_iscsi.so.8.0 00:03:12.320 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:12.320 SYMLINK libspdk_nvmf.so 00:03:12.320 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:12.320 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:12.320 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:12.320 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:12.578 SYMLINK libspdk_iscsi.so 00:03:12.578 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:12.578 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:12.578 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:12.578 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:12.578 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:12.578 CC lib/ftl/base/ftl_base_dev.o 00:03:12.578 CC lib/ftl/base/ftl_base_bdev.o 00:03:12.578 CC lib/ftl/ftl_trace.o 00:03:12.837 LIB libspdk_ftl.a 00:03:13.095 SO libspdk_ftl.so.9.0 00:03:13.354 LIB libspdk_vhost.a 00:03:13.354 SO libspdk_vhost.so.8.0 00:03:13.354 SYMLINK libspdk_vhost.so 00:03:13.354 SYMLINK libspdk_ftl.so 00:03:13.921 CC module/env_dpdk/env_dpdk_rpc.o 00:03:13.921 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:13.921 CC module/accel/ioat/accel_ioat.o 00:03:13.921 CC module/accel/error/accel_error.o 00:03:13.921 CC module/scheduler/gscheduler/gscheduler.o 00:03:13.921 CC module/blob/bdev/blob_bdev.o 00:03:13.921 CC module/sock/posix/posix.o 00:03:13.921 CC module/keyring/file/keyring.o 00:03:13.921 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:13.921 CC module/fsdev/aio/fsdev_aio.o 00:03:13.921 LIB libspdk_env_dpdk_rpc.a 00:03:13.921 SO libspdk_env_dpdk_rpc.so.6.0 00:03:13.921 SYMLINK libspdk_env_dpdk_rpc.so 00:03:13.921 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:13.921 CC module/keyring/file/keyring_rpc.o 00:03:14.178 LIB libspdk_scheduler_gscheduler.a 00:03:14.178 LIB libspdk_scheduler_dpdk_governor.a 00:03:14.178 SO libspdk_scheduler_gscheduler.so.4.0 00:03:14.178 CC module/accel/error/accel_error_rpc.o 00:03:14.178 CC module/accel/ioat/accel_ioat_rpc.o 00:03:14.178 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:14.178 LIB libspdk_scheduler_dynamic.a 00:03:14.178 SYMLINK libspdk_scheduler_gscheduler.so 00:03:14.178 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:14.178 SO libspdk_scheduler_dynamic.so.4.0 00:03:14.178 LIB libspdk_blob_bdev.a 00:03:14.178 LIB libspdk_keyring_file.a 00:03:14.178 SO libspdk_blob_bdev.so.11.0 00:03:14.178 SYMLINK libspdk_scheduler_dynamic.so 00:03:14.178 CC module/fsdev/aio/linux_aio_mgr.o 00:03:14.178 SO libspdk_keyring_file.so.2.0 00:03:14.178 LIB libspdk_accel_error.a 00:03:14.178 SYMLINK libspdk_blob_bdev.so 00:03:14.179 
LIB libspdk_accel_ioat.a 00:03:14.179 SO libspdk_accel_ioat.so.6.0 00:03:14.179 SO libspdk_accel_error.so.2.0 00:03:14.436 SYMLINK libspdk_keyring_file.so 00:03:14.436 CC module/accel/dsa/accel_dsa.o 00:03:14.436 CC module/sock/uring/uring.o 00:03:14.436 SYMLINK libspdk_accel_ioat.so 00:03:14.436 SYMLINK libspdk_accel_error.so 00:03:14.436 CC module/accel/dsa/accel_dsa_rpc.o 00:03:14.436 CC module/keyring/linux/keyring.o 00:03:14.436 CC module/keyring/linux/keyring_rpc.o 00:03:14.436 CC module/accel/iaa/accel_iaa.o 00:03:14.436 CC module/bdev/delay/vbdev_delay.o 00:03:14.694 CC module/accel/iaa/accel_iaa_rpc.o 00:03:14.694 LIB libspdk_fsdev_aio.a 00:03:14.694 LIB libspdk_keyring_linux.a 00:03:14.694 SO libspdk_keyring_linux.so.1.0 00:03:14.694 SO libspdk_fsdev_aio.so.1.0 00:03:14.694 LIB libspdk_accel_dsa.a 00:03:14.694 CC module/blobfs/bdev/blobfs_bdev.o 00:03:14.694 LIB libspdk_sock_posix.a 00:03:14.694 SO libspdk_accel_dsa.so.5.0 00:03:14.694 CC module/bdev/error/vbdev_error.o 00:03:14.694 SYMLINK libspdk_keyring_linux.so 00:03:14.694 SYMLINK libspdk_fsdev_aio.so 00:03:14.694 SO libspdk_sock_posix.so.6.0 00:03:14.694 SYMLINK libspdk_accel_dsa.so 00:03:14.694 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:14.694 LIB libspdk_accel_iaa.a 00:03:14.694 SYMLINK libspdk_sock_posix.so 00:03:14.694 CC module/bdev/error/vbdev_error_rpc.o 00:03:14.694 SO libspdk_accel_iaa.so.3.0 00:03:14.952 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:14.952 SYMLINK libspdk_accel_iaa.so 00:03:14.952 CC module/bdev/gpt/gpt.o 00:03:14.952 CC module/bdev/lvol/vbdev_lvol.o 00:03:14.952 CC module/bdev/malloc/bdev_malloc.o 00:03:14.952 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:14.952 LIB libspdk_bdev_delay.a 00:03:14.952 CC module/bdev/gpt/vbdev_gpt.o 00:03:14.952 LIB libspdk_bdev_error.a 00:03:14.952 SO libspdk_bdev_delay.so.6.0 00:03:14.952 SO libspdk_bdev_error.so.6.0 00:03:14.952 LIB libspdk_sock_uring.a 00:03:14.952 LIB libspdk_blobfs_bdev.a 00:03:14.952 CC module/bdev/null/bdev_null.o 00:03:14.952 SYMLINK libspdk_bdev_delay.so 00:03:14.952 SO libspdk_sock_uring.so.5.0 00:03:14.952 CC module/bdev/null/bdev_null_rpc.o 00:03:15.210 SO libspdk_blobfs_bdev.so.6.0 00:03:15.210 SYMLINK libspdk_bdev_error.so 00:03:15.210 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:15.210 SYMLINK libspdk_sock_uring.so 00:03:15.210 SYMLINK libspdk_blobfs_bdev.so 00:03:15.210 CC module/bdev/nvme/bdev_nvme.o 00:03:15.210 LIB libspdk_bdev_gpt.a 00:03:15.210 LIB libspdk_bdev_malloc.a 00:03:15.210 CC module/bdev/raid/bdev_raid.o 00:03:15.210 SO libspdk_bdev_gpt.so.6.0 00:03:15.210 CC module/bdev/passthru/vbdev_passthru.o 00:03:15.210 CC module/bdev/split/vbdev_split.o 00:03:15.210 SO libspdk_bdev_malloc.so.6.0 00:03:15.210 LIB libspdk_bdev_null.a 00:03:15.468 SO libspdk_bdev_null.so.6.0 00:03:15.468 SYMLINK libspdk_bdev_gpt.so 00:03:15.468 CC module/bdev/split/vbdev_split_rpc.o 00:03:15.468 SYMLINK libspdk_bdev_malloc.so 00:03:15.468 SYMLINK libspdk_bdev_null.so 00:03:15.468 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:15.468 LIB libspdk_bdev_lvol.a 00:03:15.468 SO libspdk_bdev_lvol.so.6.0 00:03:15.468 LIB libspdk_bdev_split.a 00:03:15.468 CC module/bdev/raid/bdev_raid_rpc.o 00:03:15.468 CC module/bdev/uring/bdev_uring.o 00:03:15.468 SYMLINK libspdk_bdev_lvol.so 00:03:15.468 CC module/bdev/uring/bdev_uring_rpc.o 00:03:15.468 CC module/bdev/aio/bdev_aio.o 00:03:15.726 CC module/bdev/ftl/bdev_ftl.o 00:03:15.726 SO libspdk_bdev_split.so.6.0 00:03:15.726 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:15.726 SYMLINK libspdk_bdev_split.so 
00:03:15.726 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:15.726 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:15.726 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:15.726 CC module/bdev/nvme/nvme_rpc.o 00:03:15.726 LIB libspdk_bdev_passthru.a 00:03:15.726 SO libspdk_bdev_passthru.so.6.0 00:03:15.984 CC module/bdev/nvme/bdev_mdns_client.o 00:03:15.984 LIB libspdk_bdev_ftl.a 00:03:15.984 SYMLINK libspdk_bdev_passthru.so 00:03:15.984 LIB libspdk_bdev_uring.a 00:03:15.984 SO libspdk_bdev_ftl.so.6.0 00:03:15.984 CC module/bdev/aio/bdev_aio_rpc.o 00:03:15.984 LIB libspdk_bdev_zone_block.a 00:03:15.984 SO libspdk_bdev_uring.so.6.0 00:03:15.984 SO libspdk_bdev_zone_block.so.6.0 00:03:15.984 SYMLINK libspdk_bdev_ftl.so 00:03:15.984 CC module/bdev/raid/bdev_raid_sb.o 00:03:15.984 SYMLINK libspdk_bdev_uring.so 00:03:15.984 CC module/bdev/nvme/vbdev_opal.o 00:03:15.984 SYMLINK libspdk_bdev_zone_block.so 00:03:15.984 CC module/bdev/raid/raid0.o 00:03:15.984 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:15.984 CC module/bdev/iscsi/bdev_iscsi.o 00:03:16.242 LIB libspdk_bdev_aio.a 00:03:16.242 SO libspdk_bdev_aio.so.6.0 00:03:16.242 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:16.242 SYMLINK libspdk_bdev_aio.so 00:03:16.242 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:16.242 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:16.242 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:16.242 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:16.500 CC module/bdev/raid/raid1.o 00:03:16.500 CC module/bdev/raid/concat.o 00:03:16.500 LIB libspdk_bdev_iscsi.a 00:03:16.500 SO libspdk_bdev_iscsi.so.6.0 00:03:16.500 SYMLINK libspdk_bdev_iscsi.so 00:03:16.758 LIB libspdk_bdev_raid.a 00:03:16.758 LIB libspdk_bdev_virtio.a 00:03:16.758 SO libspdk_bdev_raid.so.6.0 00:03:16.758 SO libspdk_bdev_virtio.so.6.0 00:03:16.758 SYMLINK libspdk_bdev_raid.so 00:03:16.758 SYMLINK libspdk_bdev_virtio.so 00:03:18.133 LIB libspdk_bdev_nvme.a 00:03:18.133 SO libspdk_bdev_nvme.so.7.1 00:03:18.133 SYMLINK libspdk_bdev_nvme.so 00:03:18.700 CC module/event/subsystems/vmd/vmd.o 00:03:18.700 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:18.700 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:18.700 CC module/event/subsystems/scheduler/scheduler.o 00:03:18.700 CC module/event/subsystems/sock/sock.o 00:03:18.700 CC module/event/subsystems/keyring/keyring.o 00:03:18.700 CC module/event/subsystems/fsdev/fsdev.o 00:03:18.700 CC module/event/subsystems/iobuf/iobuf.o 00:03:18.700 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:18.959 LIB libspdk_event_vmd.a 00:03:18.959 LIB libspdk_event_keyring.a 00:03:18.959 LIB libspdk_event_vhost_blk.a 00:03:18.959 LIB libspdk_event_fsdev.a 00:03:18.959 LIB libspdk_event_scheduler.a 00:03:18.959 LIB libspdk_event_sock.a 00:03:18.959 SO libspdk_event_keyring.so.1.0 00:03:18.959 SO libspdk_event_vmd.so.6.0 00:03:18.959 SO libspdk_event_vhost_blk.so.3.0 00:03:18.959 SO libspdk_event_scheduler.so.4.0 00:03:18.959 LIB libspdk_event_iobuf.a 00:03:18.959 SO libspdk_event_fsdev.so.1.0 00:03:18.959 SO libspdk_event_sock.so.5.0 00:03:18.959 SYMLINK libspdk_event_vhost_blk.so 00:03:18.959 SO libspdk_event_iobuf.so.3.0 00:03:18.959 SYMLINK libspdk_event_keyring.so 00:03:18.959 SYMLINK libspdk_event_scheduler.so 00:03:18.959 SYMLINK libspdk_event_vmd.so 00:03:18.959 SYMLINK libspdk_event_fsdev.so 00:03:18.959 SYMLINK libspdk_event_sock.so 00:03:18.959 SYMLINK libspdk_event_iobuf.so 00:03:19.217 CC module/event/subsystems/accel/accel.o 00:03:19.476 LIB libspdk_event_accel.a 00:03:19.476 SO libspdk_event_accel.so.6.0 00:03:19.476 
SYMLINK libspdk_event_accel.so 00:03:19.734 CC module/event/subsystems/bdev/bdev.o 00:03:19.993 LIB libspdk_event_bdev.a 00:03:19.993 SO libspdk_event_bdev.so.6.0 00:03:19.993 SYMLINK libspdk_event_bdev.so 00:03:20.251 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:20.251 CC module/event/subsystems/scsi/scsi.o 00:03:20.251 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:20.251 CC module/event/subsystems/ublk/ublk.o 00:03:20.251 CC module/event/subsystems/nbd/nbd.o 00:03:20.533 LIB libspdk_event_ublk.a 00:03:20.533 LIB libspdk_event_nbd.a 00:03:20.533 LIB libspdk_event_scsi.a 00:03:20.533 SO libspdk_event_nbd.so.6.0 00:03:20.533 SO libspdk_event_ublk.so.3.0 00:03:20.533 SO libspdk_event_scsi.so.6.0 00:03:20.533 SYMLINK libspdk_event_ublk.so 00:03:20.533 SYMLINK libspdk_event_nbd.so 00:03:20.533 SYMLINK libspdk_event_scsi.so 00:03:20.533 LIB libspdk_event_nvmf.a 00:03:20.533 SO libspdk_event_nvmf.so.6.0 00:03:20.791 SYMLINK libspdk_event_nvmf.so 00:03:20.791 CC module/event/subsystems/iscsi/iscsi.o 00:03:20.791 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:21.049 LIB libspdk_event_vhost_scsi.a 00:03:21.049 LIB libspdk_event_iscsi.a 00:03:21.049 SO libspdk_event_vhost_scsi.so.3.0 00:03:21.049 SO libspdk_event_iscsi.so.6.0 00:03:21.049 SYMLINK libspdk_event_vhost_scsi.so 00:03:21.049 SYMLINK libspdk_event_iscsi.so 00:03:21.308 SO libspdk.so.6.0 00:03:21.308 SYMLINK libspdk.so 00:03:21.566 CXX app/trace/trace.o 00:03:21.566 CC app/trace_record/trace_record.o 00:03:21.566 CC app/spdk_lspci/spdk_lspci.o 00:03:21.566 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:21.566 CC app/iscsi_tgt/iscsi_tgt.o 00:03:21.566 CC app/nvmf_tgt/nvmf_main.o 00:03:21.566 CC test/thread/poller_perf/poller_perf.o 00:03:21.566 CC examples/ioat/perf/perf.o 00:03:21.566 CC app/spdk_tgt/spdk_tgt.o 00:03:21.566 CC examples/util/zipf/zipf.o 00:03:21.823 LINK spdk_lspci 00:03:21.823 LINK interrupt_tgt 00:03:21.823 LINK poller_perf 00:03:21.823 LINK spdk_trace_record 00:03:21.823 LINK zipf 00:03:21.823 LINK nvmf_tgt 00:03:21.823 LINK iscsi_tgt 00:03:21.823 LINK spdk_tgt 00:03:21.823 LINK ioat_perf 00:03:22.081 CC examples/ioat/verify/verify.o 00:03:22.081 LINK spdk_trace 00:03:22.081 CC app/spdk_nvme_perf/perf.o 00:03:22.081 CC app/spdk_nvme_identify/identify.o 00:03:22.353 TEST_HEADER include/spdk/accel.h 00:03:22.353 TEST_HEADER include/spdk/accel_module.h 00:03:22.353 TEST_HEADER include/spdk/assert.h 00:03:22.353 TEST_HEADER include/spdk/barrier.h 00:03:22.353 TEST_HEADER include/spdk/base64.h 00:03:22.353 TEST_HEADER include/spdk/bdev.h 00:03:22.353 TEST_HEADER include/spdk/bdev_module.h 00:03:22.353 TEST_HEADER include/spdk/bdev_zone.h 00:03:22.353 TEST_HEADER include/spdk/bit_array.h 00:03:22.353 TEST_HEADER include/spdk/bit_pool.h 00:03:22.353 TEST_HEADER include/spdk/blob_bdev.h 00:03:22.353 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:22.353 TEST_HEADER include/spdk/blobfs.h 00:03:22.353 TEST_HEADER include/spdk/blob.h 00:03:22.353 CC test/dma/test_dma/test_dma.o 00:03:22.353 TEST_HEADER include/spdk/conf.h 00:03:22.353 CC test/app/bdev_svc/bdev_svc.o 00:03:22.353 TEST_HEADER include/spdk/config.h 00:03:22.353 TEST_HEADER include/spdk/cpuset.h 00:03:22.353 TEST_HEADER include/spdk/crc16.h 00:03:22.353 TEST_HEADER include/spdk/crc32.h 00:03:22.353 LINK verify 00:03:22.353 TEST_HEADER include/spdk/crc64.h 00:03:22.353 TEST_HEADER include/spdk/dif.h 00:03:22.353 TEST_HEADER include/spdk/dma.h 00:03:22.353 TEST_HEADER include/spdk/endian.h 00:03:22.353 TEST_HEADER include/spdk/env_dpdk.h 00:03:22.353 
TEST_HEADER include/spdk/env.h 00:03:22.353 TEST_HEADER include/spdk/event.h 00:03:22.353 TEST_HEADER include/spdk/fd_group.h 00:03:22.353 TEST_HEADER include/spdk/fd.h 00:03:22.353 TEST_HEADER include/spdk/file.h 00:03:22.353 TEST_HEADER include/spdk/fsdev.h 00:03:22.353 TEST_HEADER include/spdk/fsdev_module.h 00:03:22.353 TEST_HEADER include/spdk/ftl.h 00:03:22.353 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:22.353 TEST_HEADER include/spdk/gpt_spec.h 00:03:22.353 TEST_HEADER include/spdk/hexlify.h 00:03:22.353 TEST_HEADER include/spdk/histogram_data.h 00:03:22.353 TEST_HEADER include/spdk/idxd.h 00:03:22.353 TEST_HEADER include/spdk/idxd_spec.h 00:03:22.353 TEST_HEADER include/spdk/init.h 00:03:22.353 TEST_HEADER include/spdk/ioat.h 00:03:22.353 TEST_HEADER include/spdk/ioat_spec.h 00:03:22.353 TEST_HEADER include/spdk/iscsi_spec.h 00:03:22.353 TEST_HEADER include/spdk/json.h 00:03:22.353 CC examples/sock/hello_world/hello_sock.o 00:03:22.353 CC app/spdk_nvme_discover/discovery_aer.o 00:03:22.353 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:22.353 TEST_HEADER include/spdk/jsonrpc.h 00:03:22.353 TEST_HEADER include/spdk/keyring.h 00:03:22.353 TEST_HEADER include/spdk/keyring_module.h 00:03:22.353 TEST_HEADER include/spdk/likely.h 00:03:22.353 TEST_HEADER include/spdk/log.h 00:03:22.353 TEST_HEADER include/spdk/lvol.h 00:03:22.353 TEST_HEADER include/spdk/md5.h 00:03:22.353 TEST_HEADER include/spdk/memory.h 00:03:22.353 TEST_HEADER include/spdk/mmio.h 00:03:22.353 CC examples/thread/thread/thread_ex.o 00:03:22.353 TEST_HEADER include/spdk/nbd.h 00:03:22.353 TEST_HEADER include/spdk/net.h 00:03:22.353 TEST_HEADER include/spdk/notify.h 00:03:22.353 TEST_HEADER include/spdk/nvme.h 00:03:22.353 TEST_HEADER include/spdk/nvme_intel.h 00:03:22.353 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:22.353 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:22.353 TEST_HEADER include/spdk/nvme_spec.h 00:03:22.353 TEST_HEADER include/spdk/nvme_zns.h 00:03:22.353 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:22.353 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:22.353 TEST_HEADER include/spdk/nvmf.h 00:03:22.353 TEST_HEADER include/spdk/nvmf_spec.h 00:03:22.353 TEST_HEADER include/spdk/nvmf_transport.h 00:03:22.353 TEST_HEADER include/spdk/opal.h 00:03:22.353 TEST_HEADER include/spdk/opal_spec.h 00:03:22.353 TEST_HEADER include/spdk/pci_ids.h 00:03:22.353 TEST_HEADER include/spdk/pipe.h 00:03:22.353 TEST_HEADER include/spdk/queue.h 00:03:22.353 TEST_HEADER include/spdk/reduce.h 00:03:22.353 TEST_HEADER include/spdk/rpc.h 00:03:22.353 TEST_HEADER include/spdk/scheduler.h 00:03:22.353 TEST_HEADER include/spdk/scsi.h 00:03:22.353 TEST_HEADER include/spdk/scsi_spec.h 00:03:22.353 TEST_HEADER include/spdk/sock.h 00:03:22.353 TEST_HEADER include/spdk/stdinc.h 00:03:22.353 TEST_HEADER include/spdk/string.h 00:03:22.353 TEST_HEADER include/spdk/thread.h 00:03:22.353 TEST_HEADER include/spdk/trace.h 00:03:22.353 TEST_HEADER include/spdk/trace_parser.h 00:03:22.353 TEST_HEADER include/spdk/tree.h 00:03:22.353 TEST_HEADER include/spdk/ublk.h 00:03:22.353 TEST_HEADER include/spdk/util.h 00:03:22.353 TEST_HEADER include/spdk/uuid.h 00:03:22.353 TEST_HEADER include/spdk/version.h 00:03:22.353 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:22.353 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:22.631 TEST_HEADER include/spdk/vhost.h 00:03:22.631 TEST_HEADER include/spdk/vmd.h 00:03:22.631 LINK bdev_svc 00:03:22.631 TEST_HEADER include/spdk/xor.h 00:03:22.631 TEST_HEADER include/spdk/zipf.h 00:03:22.631 CXX 
test/cpp_headers/accel.o 00:03:22.631 CC test/app/histogram_perf/histogram_perf.o 00:03:22.631 LINK spdk_nvme_discover 00:03:22.631 LINK hello_sock 00:03:22.631 LINK thread 00:03:22.631 CXX test/cpp_headers/accel_module.o 00:03:22.631 LINK histogram_perf 00:03:22.631 CXX test/cpp_headers/assert.o 00:03:22.631 CXX test/cpp_headers/barrier.o 00:03:22.632 LINK nvme_fuzz 00:03:22.890 CXX test/cpp_headers/base64.o 00:03:22.890 LINK test_dma 00:03:22.890 CXX test/cpp_headers/bdev.o 00:03:22.890 CXX test/cpp_headers/bdev_module.o 00:03:22.890 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:22.890 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:22.890 CC examples/idxd/perf/perf.o 00:03:23.148 LINK spdk_nvme_perf 00:03:23.148 CC examples/vmd/lsvmd/lsvmd.o 00:03:23.148 CC examples/vmd/led/led.o 00:03:23.148 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:23.148 CC app/spdk_top/spdk_top.o 00:03:23.148 LINK spdk_nvme_identify 00:03:23.148 CXX test/cpp_headers/bdev_zone.o 00:03:23.148 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:23.148 CXX test/cpp_headers/bit_array.o 00:03:23.148 LINK lsvmd 00:03:23.148 LINK led 00:03:23.407 CXX test/cpp_headers/bit_pool.o 00:03:23.407 CXX test/cpp_headers/blob_bdev.o 00:03:23.407 LINK idxd_perf 00:03:23.407 CXX test/cpp_headers/blobfs_bdev.o 00:03:23.407 LINK hello_fsdev 00:03:23.407 CC test/app/jsoncat/jsoncat.o 00:03:23.665 CC examples/accel/perf/accel_perf.o 00:03:23.665 CC test/app/stub/stub.o 00:03:23.665 LINK vhost_fuzz 00:03:23.665 LINK jsoncat 00:03:23.665 CXX test/cpp_headers/blobfs.o 00:03:23.665 CC examples/blob/hello_world/hello_blob.o 00:03:23.665 CC examples/nvme/hello_world/hello_world.o 00:03:23.665 LINK stub 00:03:23.665 CXX test/cpp_headers/blob.o 00:03:23.924 CC app/spdk_dd/spdk_dd.o 00:03:23.924 CC app/vhost/vhost.o 00:03:23.924 CC test/env/mem_callbacks/mem_callbacks.o 00:03:23.924 CXX test/cpp_headers/conf.o 00:03:23.924 LINK hello_world 00:03:23.924 LINK hello_blob 00:03:23.924 LINK spdk_top 00:03:23.924 LINK accel_perf 00:03:23.924 CC examples/blob/cli/blobcli.o 00:03:24.182 LINK vhost 00:03:24.182 CXX test/cpp_headers/config.o 00:03:24.182 CXX test/cpp_headers/cpuset.o 00:03:24.182 CC examples/nvme/reconnect/reconnect.o 00:03:24.182 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:24.440 LINK spdk_dd 00:03:24.440 CXX test/cpp_headers/crc16.o 00:03:24.440 CC app/fio/nvme/fio_plugin.o 00:03:24.440 CC test/event/event_perf/event_perf.o 00:03:24.440 LINK mem_callbacks 00:03:24.440 CC examples/bdev/hello_world/hello_bdev.o 00:03:24.440 CXX test/cpp_headers/crc32.o 00:03:24.440 LINK blobcli 00:03:24.440 LINK event_perf 00:03:24.699 LINK reconnect 00:03:24.699 LINK iscsi_fuzz 00:03:24.699 CC examples/bdev/bdevperf/bdevperf.o 00:03:24.699 CC test/env/vtophys/vtophys.o 00:03:24.699 CXX test/cpp_headers/crc64.o 00:03:24.699 LINK nvme_manage 00:03:24.699 LINK hello_bdev 00:03:24.699 CC test/event/reactor/reactor.o 00:03:24.958 LINK vtophys 00:03:24.958 CC examples/nvme/arbitration/arbitration.o 00:03:24.958 CC app/fio/bdev/fio_plugin.o 00:03:24.958 CXX test/cpp_headers/dif.o 00:03:24.958 LINK spdk_nvme 00:03:24.958 CC test/event/reactor_perf/reactor_perf.o 00:03:24.958 LINK reactor 00:03:24.958 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:24.958 CXX test/cpp_headers/dma.o 00:03:24.958 CC test/env/pci/pci_ut.o 00:03:25.217 CC test/env/memory/memory_ut.o 00:03:25.217 LINK reactor_perf 00:03:25.217 CC test/nvme/aer/aer.o 00:03:25.217 CC test/rpc_client/rpc_client_test.o 00:03:25.217 LINK arbitration 00:03:25.217 LINK env_dpdk_post_init 
00:03:25.217 CXX test/cpp_headers/endian.o 00:03:25.217 CC test/event/app_repeat/app_repeat.o 00:03:25.475 LINK spdk_bdev 00:03:25.475 CXX test/cpp_headers/env_dpdk.o 00:03:25.475 LINK rpc_client_test 00:03:25.475 LINK aer 00:03:25.475 CC examples/nvme/hotplug/hotplug.o 00:03:25.475 LINK pci_ut 00:03:25.475 LINK bdevperf 00:03:25.475 LINK app_repeat 00:03:25.475 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:25.475 CXX test/cpp_headers/env.o 00:03:25.733 CC examples/nvme/abort/abort.o 00:03:25.733 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:25.733 CC test/nvme/reset/reset.o 00:03:25.733 CXX test/cpp_headers/event.o 00:03:25.733 LINK cmb_copy 00:03:25.733 LINK hotplug 00:03:25.733 CXX test/cpp_headers/fd_group.o 00:03:25.733 CC test/event/scheduler/scheduler.o 00:03:25.992 LINK pmr_persistence 00:03:25.992 CC test/accel/dif/dif.o 00:03:25.992 CXX test/cpp_headers/fd.o 00:03:25.992 CC test/nvme/sgl/sgl.o 00:03:25.992 LINK reset 00:03:25.992 LINK abort 00:03:25.992 CXX test/cpp_headers/file.o 00:03:25.992 LINK scheduler 00:03:26.250 CC test/blobfs/mkfs/mkfs.o 00:03:26.250 CC test/lvol/esnap/esnap.o 00:03:26.250 CXX test/cpp_headers/fsdev.o 00:03:26.250 CC test/nvme/overhead/overhead.o 00:03:26.250 CC test/nvme/e2edp/nvme_dp.o 00:03:26.250 LINK sgl 00:03:26.250 CXX test/cpp_headers/fsdev_module.o 00:03:26.250 LINK memory_ut 00:03:26.250 LINK mkfs 00:03:26.508 CC examples/nvmf/nvmf/nvmf.o 00:03:26.508 CXX test/cpp_headers/ftl.o 00:03:26.508 CC test/nvme/err_injection/err_injection.o 00:03:26.508 CC test/nvme/startup/startup.o 00:03:26.508 LINK nvme_dp 00:03:26.508 LINK overhead 00:03:26.508 CC test/nvme/reserve/reserve.o 00:03:26.508 LINK dif 00:03:26.508 CC test/nvme/simple_copy/simple_copy.o 00:03:26.766 CXX test/cpp_headers/fuse_dispatcher.o 00:03:26.766 CXX test/cpp_headers/gpt_spec.o 00:03:26.766 LINK err_injection 00:03:26.766 LINK startup 00:03:26.766 LINK nvmf 00:03:26.766 LINK reserve 00:03:26.766 CC test/nvme/connect_stress/connect_stress.o 00:03:26.766 LINK simple_copy 00:03:26.766 CXX test/cpp_headers/hexlify.o 00:03:27.025 CC test/nvme/boot_partition/boot_partition.o 00:03:27.025 CC test/nvme/fused_ordering/fused_ordering.o 00:03:27.025 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:27.025 CC test/nvme/compliance/nvme_compliance.o 00:03:27.025 CXX test/cpp_headers/histogram_data.o 00:03:27.025 LINK connect_stress 00:03:27.025 LINK boot_partition 00:03:27.025 CC test/nvme/fdp/fdp.o 00:03:27.025 CC test/nvme/cuse/cuse.o 00:03:27.283 LINK doorbell_aers 00:03:27.283 LINK fused_ordering 00:03:27.283 CXX test/cpp_headers/idxd.o 00:03:27.283 CXX test/cpp_headers/idxd_spec.o 00:03:27.283 CC test/bdev/bdevio/bdevio.o 00:03:27.283 CXX test/cpp_headers/init.o 00:03:27.283 LINK nvme_compliance 00:03:27.283 CXX test/cpp_headers/ioat.o 00:03:27.283 CXX test/cpp_headers/ioat_spec.o 00:03:27.283 CXX test/cpp_headers/iscsi_spec.o 00:03:27.283 CXX test/cpp_headers/json.o 00:03:27.541 CXX test/cpp_headers/jsonrpc.o 00:03:27.541 CXX test/cpp_headers/keyring.o 00:03:27.541 CXX test/cpp_headers/keyring_module.o 00:03:27.541 CXX test/cpp_headers/likely.o 00:03:27.541 LINK fdp 00:03:27.541 CXX test/cpp_headers/log.o 00:03:27.541 CXX test/cpp_headers/lvol.o 00:03:27.541 LINK bdevio 00:03:27.541 CXX test/cpp_headers/md5.o 00:03:27.541 CXX test/cpp_headers/memory.o 00:03:27.541 CXX test/cpp_headers/mmio.o 00:03:27.541 CXX test/cpp_headers/nbd.o 00:03:27.541 CXX test/cpp_headers/net.o 00:03:27.799 CXX test/cpp_headers/notify.o 00:03:27.799 CXX test/cpp_headers/nvme.o 00:03:27.799 CXX 
test/cpp_headers/nvme_intel.o 00:03:27.799 CXX test/cpp_headers/nvme_ocssd.o 00:03:27.799 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:27.799 CXX test/cpp_headers/nvme_spec.o 00:03:27.799 CXX test/cpp_headers/nvme_zns.o 00:03:27.799 CXX test/cpp_headers/nvmf_cmd.o 00:03:27.799 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:27.799 CXX test/cpp_headers/nvmf.o 00:03:27.799 CXX test/cpp_headers/nvmf_spec.o 00:03:28.057 CXX test/cpp_headers/nvmf_transport.o 00:03:28.057 CXX test/cpp_headers/opal.o 00:03:28.057 CXX test/cpp_headers/opal_spec.o 00:03:28.057 CXX test/cpp_headers/pci_ids.o 00:03:28.057 CXX test/cpp_headers/pipe.o 00:03:28.057 CXX test/cpp_headers/queue.o 00:03:28.057 CXX test/cpp_headers/reduce.o 00:03:28.057 CXX test/cpp_headers/rpc.o 00:03:28.057 CXX test/cpp_headers/scheduler.o 00:03:28.057 CXX test/cpp_headers/scsi.o 00:03:28.315 CXX test/cpp_headers/scsi_spec.o 00:03:28.315 CXX test/cpp_headers/sock.o 00:03:28.315 CXX test/cpp_headers/stdinc.o 00:03:28.315 CXX test/cpp_headers/string.o 00:03:28.315 CXX test/cpp_headers/thread.o 00:03:28.315 CXX test/cpp_headers/trace.o 00:03:28.315 CXX test/cpp_headers/trace_parser.o 00:03:28.315 CXX test/cpp_headers/tree.o 00:03:28.315 CXX test/cpp_headers/ublk.o 00:03:28.315 CXX test/cpp_headers/util.o 00:03:28.315 CXX test/cpp_headers/uuid.o 00:03:28.315 CXX test/cpp_headers/version.o 00:03:28.315 CXX test/cpp_headers/vfio_user_pci.o 00:03:28.315 CXX test/cpp_headers/vfio_user_spec.o 00:03:28.315 CXX test/cpp_headers/vhost.o 00:03:28.573 CXX test/cpp_headers/vmd.o 00:03:28.573 CXX test/cpp_headers/xor.o 00:03:28.573 LINK cuse 00:03:28.573 CXX test/cpp_headers/zipf.o 00:03:31.855 LINK esnap 00:03:31.855 00:03:31.855 real 1m29.213s 00:03:31.855 user 8m7.525s 00:03:31.855 sys 1m37.576s 00:03:31.855 10:21:32 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:31.855 10:21:32 make -- common/autotest_common.sh@10 -- $ set +x 00:03:31.855 ************************************ 00:03:31.855 END TEST make 00:03:31.855 ************************************ 00:03:31.855 10:21:32 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:31.855 10:21:32 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:31.855 10:21:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:31.855 10:21:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.855 10:21:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:31.855 10:21:32 -- pm/common@44 -- $ pid=5242 00:03:31.855 10:21:32 -- pm/common@50 -- $ kill -TERM 5242 00:03:31.855 10:21:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.855 10:21:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:31.855 10:21:32 -- pm/common@44 -- $ pid=5244 00:03:31.855 10:21:32 -- pm/common@50 -- $ kill -TERM 5244 00:03:31.855 10:21:32 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:31.855 10:21:32 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:32.113 10:21:32 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:32.113 10:21:32 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:32.113 10:21:32 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:32.113 10:21:32 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:32.113 10:21:32 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:32.113 10:21:32 -- 
scripts/common.sh@333 -- # local ver1 ver1_l 00:03:32.113 10:21:32 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:32.113 10:21:32 -- scripts/common.sh@336 -- # IFS=.-: 00:03:32.113 10:21:32 -- scripts/common.sh@336 -- # read -ra ver1 00:03:32.113 10:21:32 -- scripts/common.sh@337 -- # IFS=.-: 00:03:32.113 10:21:32 -- scripts/common.sh@337 -- # read -ra ver2 00:03:32.113 10:21:32 -- scripts/common.sh@338 -- # local 'op=<' 00:03:32.113 10:21:32 -- scripts/common.sh@340 -- # ver1_l=2 00:03:32.113 10:21:32 -- scripts/common.sh@341 -- # ver2_l=1 00:03:32.113 10:21:32 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:32.113 10:21:32 -- scripts/common.sh@344 -- # case "$op" in 00:03:32.113 10:21:32 -- scripts/common.sh@345 -- # : 1 00:03:32.113 10:21:32 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:32.113 10:21:32 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:32.113 10:21:32 -- scripts/common.sh@365 -- # decimal 1 00:03:32.113 10:21:32 -- scripts/common.sh@353 -- # local d=1 00:03:32.113 10:21:32 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:32.113 10:21:32 -- scripts/common.sh@355 -- # echo 1 00:03:32.113 10:21:32 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:32.113 10:21:32 -- scripts/common.sh@366 -- # decimal 2 00:03:32.113 10:21:32 -- scripts/common.sh@353 -- # local d=2 00:03:32.113 10:21:32 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:32.114 10:21:32 -- scripts/common.sh@355 -- # echo 2 00:03:32.114 10:21:32 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:32.114 10:21:32 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:32.114 10:21:32 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:32.114 10:21:32 -- scripts/common.sh@368 -- # return 0 00:03:32.114 10:21:32 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:32.114 10:21:32 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:32.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.114 --rc genhtml_branch_coverage=1 00:03:32.114 --rc genhtml_function_coverage=1 00:03:32.114 --rc genhtml_legend=1 00:03:32.114 --rc geninfo_all_blocks=1 00:03:32.114 --rc geninfo_unexecuted_blocks=1 00:03:32.114 00:03:32.114 ' 00:03:32.114 10:21:32 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:32.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.114 --rc genhtml_branch_coverage=1 00:03:32.114 --rc genhtml_function_coverage=1 00:03:32.114 --rc genhtml_legend=1 00:03:32.114 --rc geninfo_all_blocks=1 00:03:32.114 --rc geninfo_unexecuted_blocks=1 00:03:32.114 00:03:32.114 ' 00:03:32.114 10:21:32 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:32.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.114 --rc genhtml_branch_coverage=1 00:03:32.114 --rc genhtml_function_coverage=1 00:03:32.114 --rc genhtml_legend=1 00:03:32.114 --rc geninfo_all_blocks=1 00:03:32.114 --rc geninfo_unexecuted_blocks=1 00:03:32.114 00:03:32.114 ' 00:03:32.114 10:21:32 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:32.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.114 --rc genhtml_branch_coverage=1 00:03:32.114 --rc genhtml_function_coverage=1 00:03:32.114 --rc genhtml_legend=1 00:03:32.114 --rc geninfo_all_blocks=1 00:03:32.114 --rc geninfo_unexecuted_blocks=1 00:03:32.114 00:03:32.114 ' 00:03:32.114 10:21:32 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:32.114 
10:21:32 -- nvmf/common.sh@7 -- # uname -s 00:03:32.114 10:21:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:32.114 10:21:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:32.114 10:21:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:32.114 10:21:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:32.114 10:21:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:32.114 10:21:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:32.114 10:21:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:32.114 10:21:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:32.114 10:21:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:32.114 10:21:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:32.114 10:21:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:03:32.114 10:21:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:03:32.114 10:21:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:32.114 10:21:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:32.114 10:21:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:32.114 10:21:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:32.114 10:21:32 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:32.114 10:21:32 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:32.114 10:21:32 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:32.114 10:21:32 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:32.114 10:21:32 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:32.114 10:21:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.114 10:21:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.114 10:21:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.114 10:21:32 -- paths/export.sh@5 -- # export PATH 00:03:32.114 10:21:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.114 10:21:32 -- nvmf/common.sh@51 -- # : 0 00:03:32.114 10:21:32 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:32.114 10:21:32 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:32.114 10:21:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:32.114 10:21:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:32.114 10:21:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:32.114 10:21:32 -- nvmf/common.sh@33 -- # '[' 
'' -eq 1 ']' 00:03:32.114 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:32.114 10:21:32 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:32.114 10:21:32 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:32.114 10:21:32 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:32.114 10:21:32 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:32.114 10:21:32 -- spdk/autotest.sh@32 -- # uname -s 00:03:32.114 10:21:32 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:32.114 10:21:32 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:32.114 10:21:32 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:32.114 10:21:32 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:32.114 10:21:32 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:32.114 10:21:32 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:32.114 10:21:32 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:32.114 10:21:32 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:32.114 10:21:32 -- spdk/autotest.sh@48 -- # udevadm_pid=54344 00:03:32.114 10:21:32 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:32.114 10:21:32 -- pm/common@17 -- # local monitor 00:03:32.114 10:21:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:32.114 10:21:32 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:32.114 10:21:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:32.114 10:21:32 -- pm/common@25 -- # sleep 1 00:03:32.114 10:21:32 -- pm/common@21 -- # date +%s 00:03:32.114 10:21:32 -- pm/common@21 -- # date +%s 00:03:32.114 10:21:32 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731666092 00:03:32.114 10:21:32 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731666092 00:03:32.114 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731666092_collect-cpu-load.pm.log 00:03:32.114 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731666092_collect-vmstat.pm.log 00:03:33.488 10:21:33 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:33.488 10:21:33 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:33.488 10:21:33 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:33.488 10:21:33 -- common/autotest_common.sh@10 -- # set +x 00:03:33.488 10:21:33 -- spdk/autotest.sh@59 -- # create_test_list 00:03:33.488 10:21:33 -- common/autotest_common.sh@750 -- # xtrace_disable 00:03:33.488 10:21:33 -- common/autotest_common.sh@10 -- # set +x 00:03:33.488 10:21:33 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:33.488 10:21:33 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:33.488 10:21:33 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:33.488 10:21:33 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:33.488 10:21:33 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:33.488 10:21:33 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:33.488 10:21:33 -- common/autotest_common.sh@1455 -- # uname 00:03:33.488 10:21:33 
-- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:33.488 10:21:33 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:33.488 10:21:33 -- common/autotest_common.sh@1475 -- # uname 00:03:33.488 10:21:33 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:33.488 10:21:33 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:33.488 10:21:33 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:33.488 lcov: LCOV version 1.15 00:03:33.488 10:21:34 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:51.568 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:51.568 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:06.446 10:22:06 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:06.446 10:22:06 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:06.446 10:22:06 -- common/autotest_common.sh@10 -- # set +x 00:04:06.446 10:22:06 -- spdk/autotest.sh@78 -- # rm -f 00:04:06.447 10:22:06 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:06.447 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:06.447 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:06.447 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:06.447 10:22:07 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:06.447 10:22:07 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:06.447 10:22:07 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:06.447 10:22:07 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:06.447 10:22:07 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:06.447 10:22:07 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:06.447 10:22:07 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:06.447 10:22:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:06.447 10:22:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:06.447 10:22:07 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:06.447 10:22:07 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:06.447 10:22:07 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:06.447 10:22:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:06.447 10:22:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:06.447 10:22:07 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:06.447 10:22:07 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:04:06.447 10:22:07 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:04:06.447 10:22:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:06.447 10:22:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:06.447 10:22:07 -- common/autotest_common.sh@1658 -- # for 
nvme in /sys/block/nvme* 00:04:06.447 10:22:07 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:04:06.447 10:22:07 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:04:06.447 10:22:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:06.447 10:22:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:06.447 10:22:07 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:06.447 10:22:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:06.447 10:22:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:06.447 10:22:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:06.447 10:22:07 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:06.447 10:22:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:06.447 No valid GPT data, bailing 00:04:06.447 10:22:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:06.447 10:22:07 -- scripts/common.sh@394 -- # pt= 00:04:06.447 10:22:07 -- scripts/common.sh@395 -- # return 1 00:04:06.447 10:22:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:06.447 1+0 records in 00:04:06.447 1+0 records out 00:04:06.447 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00517238 s, 203 MB/s 00:04:06.447 10:22:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:06.447 10:22:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:06.447 10:22:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:06.447 10:22:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:06.447 10:22:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:06.447 No valid GPT data, bailing 00:04:06.447 10:22:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:06.706 10:22:07 -- scripts/common.sh@394 -- # pt= 00:04:06.706 10:22:07 -- scripts/common.sh@395 -- # return 1 00:04:06.706 10:22:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:06.706 1+0 records in 00:04:06.706 1+0 records out 00:04:06.706 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00520957 s, 201 MB/s 00:04:06.706 10:22:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:06.706 10:22:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:06.706 10:22:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:06.706 10:22:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:06.706 10:22:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:06.706 No valid GPT data, bailing 00:04:06.706 10:22:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:06.706 10:22:07 -- scripts/common.sh@394 -- # pt= 00:04:06.706 10:22:07 -- scripts/common.sh@395 -- # return 1 00:04:06.706 10:22:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:06.706 1+0 records in 00:04:06.706 1+0 records out 00:04:06.706 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00469293 s, 223 MB/s 00:04:06.706 10:22:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:06.706 10:22:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:06.706 10:22:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:06.706 10:22:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:06.706 10:22:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:06.706 No valid GPT data, bailing 
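A note on the namespace scrub traced above: before the functional tests start, autotest walks every whole NVMe namespace (after checking /sys/block/*/queue/zoned so zoned devices are left alone), asks spdk-gpt.py and blkid whether a partition table is present, and if none is found zeroes the first 1 MiB so stale GPT or filesystem signatures cannot confuse later bdev tests. A condensed sketch of that pattern, for illustration only (the authoritative logic lives in autotest.sh and scripts/common.sh):

shopt -s extglob
for dev in /dev/nvme*n!(*p*); do                 # whole namespaces only, skip partitions
    [[ -b $dev ]] || continue
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z $pt ]]; then
        # no partition table detected, so the namespace is considered free:
        # wipe the label area at the start of the device
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done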
00:04:06.706 10:22:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:06.706 10:22:07 -- scripts/common.sh@394 -- # pt= 00:04:06.706 10:22:07 -- scripts/common.sh@395 -- # return 1 00:04:06.706 10:22:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:06.706 1+0 records in 00:04:06.706 1+0 records out 00:04:06.706 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00426153 s, 246 MB/s 00:04:06.706 10:22:07 -- spdk/autotest.sh@105 -- # sync 00:04:06.706 10:22:07 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:06.706 10:22:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:06.706 10:22:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:08.611 10:22:09 -- spdk/autotest.sh@111 -- # uname -s 00:04:08.611 10:22:09 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:08.611 10:22:09 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:08.611 10:22:09 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:09.546 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:09.546 Hugepages 00:04:09.546 node hugesize free / total 00:04:09.546 node0 1048576kB 0 / 0 00:04:09.546 node0 2048kB 0 / 0 00:04:09.546 00:04:09.546 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:09.546 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:09.546 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:09.546 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:09.546 10:22:10 -- spdk/autotest.sh@117 -- # uname -s 00:04:09.546 10:22:10 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:09.546 10:22:10 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:09.546 10:22:10 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:10.112 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:10.369 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.369 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.370 10:22:11 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:11.744 10:22:12 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:11.744 10:22:12 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:11.744 10:22:12 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:11.744 10:22:12 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:11.744 10:22:12 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:11.744 10:22:12 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:11.744 10:22:12 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:11.744 10:22:12 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:11.744 10:22:12 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:11.744 10:22:12 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:11.744 10:22:12 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:11.744 10:22:12 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:11.744 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.744 Waiting for block devices as requested 00:04:12.002 0000:00:11.0 (1b36 0010): uio_pci_generic 
-> nvme 00:04:12.002 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:12.002 10:22:12 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:12.002 10:22:12 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:12.002 10:22:12 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:12.002 10:22:12 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:12.002 10:22:12 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:12.002 10:22:12 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:12.002 10:22:12 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:12.002 10:22:12 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:12.002 10:22:12 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:12.002 10:22:12 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:12.002 10:22:12 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:12.002 10:22:12 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:12.002 10:22:12 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:12.002 10:22:12 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:12.002 10:22:12 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:12.002 10:22:12 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:12.002 10:22:12 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:12.002 10:22:12 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:12.002 10:22:12 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:12.002 10:22:12 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:12.002 10:22:12 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:12.002 10:22:12 -- common/autotest_common.sh@1541 -- # continue 00:04:12.002 10:22:12 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:12.002 10:22:12 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:12.002 10:22:12 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:12.002 10:22:12 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:12.002 10:22:12 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:12.002 10:22:12 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:12.002 10:22:12 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:12.002 10:22:12 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:12.002 10:22:12 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:12.002 10:22:12 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:12.002 10:22:12 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:12.002 10:22:12 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:12.002 10:22:12 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:12.261 10:22:12 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:12.261 10:22:12 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:12.261 10:22:12 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:12.261 10:22:12 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:12.261 10:22:12 -- 
common/autotest_common.sh@1538 -- # grep unvmcap 00:04:12.261 10:22:12 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:12.261 10:22:12 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:12.261 10:22:12 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:12.261 10:22:12 -- common/autotest_common.sh@1541 -- # continue 00:04:12.261 10:22:12 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:12.261 10:22:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:12.261 10:22:12 -- common/autotest_common.sh@10 -- # set +x 00:04:12.261 10:22:12 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:12.261 10:22:12 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:12.262 10:22:12 -- common/autotest_common.sh@10 -- # set +x 00:04:12.262 10:22:12 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:12.827 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.827 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:13.085 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:13.085 10:22:13 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:13.085 10:22:13 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:13.085 10:22:13 -- common/autotest_common.sh@10 -- # set +x 00:04:13.085 10:22:13 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:13.085 10:22:13 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:13.085 10:22:13 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:13.085 10:22:13 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:13.085 10:22:13 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:13.085 10:22:13 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:13.085 10:22:13 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:13.085 10:22:13 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:13.085 10:22:13 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:13.085 10:22:13 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:13.085 10:22:13 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:13.085 10:22:13 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:13.085 10:22:13 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:13.085 10:22:13 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:13.085 10:22:13 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:13.085 10:22:13 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:13.085 10:22:13 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:13.085 10:22:13 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:13.085 10:22:13 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:13.085 10:22:13 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:13.085 10:22:13 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:13.085 10:22:13 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:13.085 10:22:13 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:13.085 10:22:13 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:13.085 10:22:13 -- common/autotest_common.sh@1570 -- # return 0 00:04:13.085 10:22:13 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:13.085 10:22:13 
-- common/autotest_common.sh@1578 -- # return 0 00:04:13.085 10:22:13 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:13.085 10:22:13 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:13.085 10:22:13 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:13.085 10:22:13 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:13.085 10:22:13 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:13.085 10:22:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:13.085 10:22:13 -- common/autotest_common.sh@10 -- # set +x 00:04:13.085 10:22:13 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:13.085 10:22:13 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:13.085 10:22:13 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:13.085 10:22:13 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:13.085 10:22:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:13.085 10:22:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:13.085 10:22:13 -- common/autotest_common.sh@10 -- # set +x 00:04:13.085 ************************************ 00:04:13.085 START TEST env 00:04:13.085 ************************************ 00:04:13.085 10:22:13 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:13.343 * Looking for test storage... 00:04:13.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:13.343 10:22:13 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:13.343 10:22:13 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:13.343 10:22:13 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:13.343 10:22:14 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:13.343 10:22:14 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:13.343 10:22:14 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:13.343 10:22:14 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:13.343 10:22:14 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.343 10:22:14 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:13.343 10:22:14 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:13.343 10:22:14 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:13.343 10:22:14 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:13.343 10:22:14 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:13.343 10:22:14 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:13.343 10:22:14 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:13.343 10:22:14 env -- scripts/common.sh@344 -- # case "$op" in 00:04:13.343 10:22:14 env -- scripts/common.sh@345 -- # : 1 00:04:13.343 10:22:14 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:13.343 10:22:14 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:13.343 10:22:14 env -- scripts/common.sh@365 -- # decimal 1 00:04:13.343 10:22:14 env -- scripts/common.sh@353 -- # local d=1 00:04:13.343 10:22:14 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.343 10:22:14 env -- scripts/common.sh@355 -- # echo 1 00:04:13.343 10:22:14 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:13.343 10:22:14 env -- scripts/common.sh@366 -- # decimal 2 00:04:13.343 10:22:14 env -- scripts/common.sh@353 -- # local d=2 00:04:13.343 10:22:14 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.343 10:22:14 env -- scripts/common.sh@355 -- # echo 2 00:04:13.343 10:22:14 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:13.343 10:22:14 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:13.343 10:22:14 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:13.343 10:22:14 env -- scripts/common.sh@368 -- # return 0 00:04:13.343 10:22:14 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.343 10:22:14 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:13.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.343 --rc genhtml_branch_coverage=1 00:04:13.343 --rc genhtml_function_coverage=1 00:04:13.343 --rc genhtml_legend=1 00:04:13.343 --rc geninfo_all_blocks=1 00:04:13.343 --rc geninfo_unexecuted_blocks=1 00:04:13.343 00:04:13.343 ' 00:04:13.343 10:22:14 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:13.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.343 --rc genhtml_branch_coverage=1 00:04:13.343 --rc genhtml_function_coverage=1 00:04:13.343 --rc genhtml_legend=1 00:04:13.343 --rc geninfo_all_blocks=1 00:04:13.343 --rc geninfo_unexecuted_blocks=1 00:04:13.343 00:04:13.343 ' 00:04:13.343 10:22:14 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:13.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.343 --rc genhtml_branch_coverage=1 00:04:13.343 --rc genhtml_function_coverage=1 00:04:13.343 --rc genhtml_legend=1 00:04:13.343 --rc geninfo_all_blocks=1 00:04:13.343 --rc geninfo_unexecuted_blocks=1 00:04:13.343 00:04:13.343 ' 00:04:13.343 10:22:14 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:13.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.343 --rc genhtml_branch_coverage=1 00:04:13.343 --rc genhtml_function_coverage=1 00:04:13.343 --rc genhtml_legend=1 00:04:13.343 --rc geninfo_all_blocks=1 00:04:13.343 --rc geninfo_unexecuted_blocks=1 00:04:13.343 00:04:13.343 ' 00:04:13.343 10:22:14 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:13.343 10:22:14 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:13.343 10:22:14 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:13.343 10:22:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.343 ************************************ 00:04:13.343 START TEST env_memory 00:04:13.343 ************************************ 00:04:13.343 10:22:14 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:13.343 00:04:13.343 00:04:13.343 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.343 http://cunit.sourceforge.net/ 00:04:13.343 00:04:13.343 00:04:13.343 Suite: memory 00:04:13.343 Test: alloc and free memory map ...[2024-11-15 10:22:14.114522] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:13.343 passed 00:04:13.343 Test: mem map translation ...[2024-11-15 10:22:14.145595] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:13.343 [2024-11-15 10:22:14.145639] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:13.343 [2024-11-15 10:22:14.145695] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:13.343 [2024-11-15 10:22:14.145706] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:13.601 passed 00:04:13.601 Test: mem map registration ...[2024-11-15 10:22:14.209169] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:13.601 [2024-11-15 10:22:14.209202] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:13.601 passed 00:04:13.601 Test: mem map adjacent registrations ...passed 00:04:13.601 00:04:13.601 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.601 suites 1 1 n/a 0 0 00:04:13.601 tests 4 4 4 0 0 00:04:13.601 asserts 152 152 152 0 n/a 00:04:13.601 00:04:13.601 Elapsed time = 0.196 seconds 00:04:13.601 00:04:13.601 real 0m0.214s 00:04:13.601 user 0m0.200s 00:04:13.601 sys 0m0.010s 00:04:13.601 10:22:14 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:13.601 10:22:14 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:13.601 ************************************ 00:04:13.601 END TEST env_memory 00:04:13.601 ************************************ 00:04:13.601 10:22:14 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:13.601 10:22:14 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:13.601 10:22:14 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:13.601 10:22:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.601 ************************************ 00:04:13.601 START TEST env_vtophys 00:04:13.601 ************************************ 00:04:13.601 10:22:14 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:13.601 EAL: lib.eal log level changed from notice to debug 00:04:13.601 EAL: Detected lcore 0 as core 0 on socket 0 00:04:13.601 EAL: Detected lcore 1 as core 0 on socket 0 00:04:13.601 EAL: Detected lcore 2 as core 0 on socket 0 00:04:13.601 EAL: Detected lcore 3 as core 0 on socket 0 00:04:13.601 EAL: Detected lcore 4 as core 0 on socket 0 00:04:13.601 EAL: Detected lcore 5 as core 0 on socket 0 00:04:13.601 EAL: Detected lcore 6 as core 0 on socket 0 00:04:13.601 EAL: Detected lcore 7 as core 0 on socket 0 00:04:13.601 EAL: Detected lcore 8 as core 0 on socket 0 00:04:13.601 EAL: Detected lcore 9 as core 0 on socket 0 00:04:13.601 EAL: Maximum logical cores by configuration: 128 00:04:13.601 EAL: Detected CPU lcores: 10 00:04:13.602 EAL: Detected NUMA nodes: 1 00:04:13.602 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:13.602 EAL: Detected shared linkage of DPDK 00:04:13.602 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:13.602 EAL: Selected IOVA mode 'PA' 00:04:13.602 EAL: Probing VFIO support... 00:04:13.602 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:13.602 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:13.602 EAL: Ask a virtual area of 0x2e000 bytes 00:04:13.602 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:13.602 EAL: Setting up physically contiguous memory... 00:04:13.602 EAL: Setting maximum number of open files to 524288 00:04:13.602 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:13.602 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:13.602 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.602 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:13.602 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:13.602 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.602 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:13.602 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:13.602 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.602 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:13.602 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:13.602 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.602 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:13.602 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:13.602 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.602 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:13.602 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:13.602 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.602 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:13.602 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:13.602 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.602 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:13.602 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:13.602 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.602 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:13.602 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:13.602 EAL: Hugepages will be freed exactly as allocated. 00:04:13.602 EAL: No shared files mode enabled, IPC is disabled 00:04:13.602 EAL: No shared files mode enabled, IPC is disabled 00:04:13.860 EAL: TSC frequency is ~2200000 KHz 00:04:13.860 EAL: Main lcore 0 is ready (tid=7f679882ea00;cpuset=[0]) 00:04:13.860 EAL: Trying to obtain current memory policy. 00:04:13.860 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.860 EAL: Restoring previous memory policy: 0 00:04:13.860 EAL: request: mp_malloc_sync 00:04:13.860 EAL: No shared files mode enabled, IPC is disabled 00:04:13.860 EAL: Heap on socket 0 was expanded by 2MB 00:04:13.860 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:13.860 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:13.860 EAL: Mem event callback 'spdk:(nil)' registered 00:04:13.860 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:13.860 00:04:13.860 00:04:13.860 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.860 http://cunit.sourceforge.net/ 00:04:13.860 00:04:13.860 00:04:13.860 Suite: components_suite 00:04:13.860 Test: vtophys_malloc_test ...passed 00:04:13.860 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:13.860 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.860 EAL: Restoring previous memory policy: 4 00:04:13.860 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.860 EAL: request: mp_malloc_sync 00:04:13.860 EAL: No shared files mode enabled, IPC is disabled 00:04:13.860 EAL: Heap on socket 0 was expanded by 4MB 00:04:13.860 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.860 EAL: request: mp_malloc_sync 00:04:13.860 EAL: No shared files mode enabled, IPC is disabled 00:04:13.860 EAL: Heap on socket 0 was shrunk by 4MB 00:04:13.860 EAL: Trying to obtain current memory policy. 00:04:13.860 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.860 EAL: Restoring previous memory policy: 4 00:04:13.860 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.860 EAL: request: mp_malloc_sync 00:04:13.860 EAL: No shared files mode enabled, IPC is disabled 00:04:13.860 EAL: Heap on socket 0 was expanded by 6MB 00:04:13.860 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.860 EAL: request: mp_malloc_sync 00:04:13.860 EAL: No shared files mode enabled, IPC is disabled 00:04:13.860 EAL: Heap on socket 0 was shrunk by 6MB 00:04:13.860 EAL: Trying to obtain current memory policy. 00:04:13.860 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.860 EAL: Restoring previous memory policy: 4 00:04:13.860 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.860 EAL: request: mp_malloc_sync 00:04:13.860 EAL: No shared files mode enabled, IPC is disabled 00:04:13.860 EAL: Heap on socket 0 was expanded by 10MB 00:04:13.860 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.860 EAL: request: mp_malloc_sync 00:04:13.860 EAL: No shared files mode enabled, IPC is disabled 00:04:13.860 EAL: Heap on socket 0 was shrunk by 10MB 00:04:13.860 EAL: Trying to obtain current memory policy. 00:04:13.860 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.860 EAL: Restoring previous memory policy: 4 00:04:13.860 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.860 EAL: request: mp_malloc_sync 00:04:13.860 EAL: No shared files mode enabled, IPC is disabled 00:04:13.860 EAL: Heap on socket 0 was expanded by 18MB 00:04:13.860 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.860 EAL: request: mp_malloc_sync 00:04:13.860 EAL: No shared files mode enabled, IPC is disabled 00:04:13.860 EAL: Heap on socket 0 was shrunk by 18MB 00:04:13.860 EAL: Trying to obtain current memory policy. 00:04:13.860 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.860 EAL: Restoring previous memory policy: 4 00:04:13.860 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.860 EAL: request: mp_malloc_sync 00:04:13.860 EAL: No shared files mode enabled, IPC is disabled 00:04:13.860 EAL: Heap on socket 0 was expanded by 34MB 00:04:13.860 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.860 EAL: request: mp_malloc_sync 00:04:13.860 EAL: No shared files mode enabled, IPC is disabled 00:04:13.860 EAL: Heap on socket 0 was shrunk by 34MB 00:04:13.860 EAL: Trying to obtain current memory policy. 
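The alternating "Heap on socket 0 was expanded by ..." and "shrunk by ..." messages in this vtophys run are the malloc test growing and releasing DPDK's hugepage-backed heap on demand, one allocation size at a time. When this stage fails, a common cause is an exhausted 2 MB hugepage pool. A minimal sketch for inspecting and growing that pool, assuming the default 2 MB page size and a single NUMA node:

# pages currently reserved and still free
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages

# reserve 1024 x 2 MB pages (2 GB) before re-running the env tests
echo 1024 | sudo tee /proc/sys/vm/nr_hugepages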
00:04:13.860 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.860 EAL: Restoring previous memory policy: 4 00:04:13.860 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.860 EAL: request: mp_malloc_sync 00:04:13.860 EAL: No shared files mode enabled, IPC is disabled 00:04:13.860 EAL: Heap on socket 0 was expanded by 66MB 00:04:13.860 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.860 EAL: request: mp_malloc_sync 00:04:13.860 EAL: No shared files mode enabled, IPC is disabled 00:04:13.860 EAL: Heap on socket 0 was shrunk by 66MB 00:04:13.860 EAL: Trying to obtain current memory policy. 00:04:13.860 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.860 EAL: Restoring previous memory policy: 4 00:04:13.860 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.860 EAL: request: mp_malloc_sync 00:04:13.860 EAL: No shared files mode enabled, IPC is disabled 00:04:13.860 EAL: Heap on socket 0 was expanded by 130MB 00:04:13.860 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.860 EAL: request: mp_malloc_sync 00:04:13.860 EAL: No shared files mode enabled, IPC is disabled 00:04:13.860 EAL: Heap on socket 0 was shrunk by 130MB 00:04:13.860 EAL: Trying to obtain current memory policy. 00:04:13.860 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.860 EAL: Restoring previous memory policy: 4 00:04:13.860 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.860 EAL: request: mp_malloc_sync 00:04:13.860 EAL: No shared files mode enabled, IPC is disabled 00:04:13.860 EAL: Heap on socket 0 was expanded by 258MB 00:04:14.118 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.118 EAL: request: mp_malloc_sync 00:04:14.118 EAL: No shared files mode enabled, IPC is disabled 00:04:14.118 EAL: Heap on socket 0 was shrunk by 258MB 00:04:14.118 EAL: Trying to obtain current memory policy. 00:04:14.118 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.118 EAL: Restoring previous memory policy: 4 00:04:14.118 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.118 EAL: request: mp_malloc_sync 00:04:14.118 EAL: No shared files mode enabled, IPC is disabled 00:04:14.118 EAL: Heap on socket 0 was expanded by 514MB 00:04:14.376 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.376 EAL: request: mp_malloc_sync 00:04:14.376 EAL: No shared files mode enabled, IPC is disabled 00:04:14.376 EAL: Heap on socket 0 was shrunk by 514MB 00:04:14.376 EAL: Trying to obtain current memory policy. 
00:04:14.376 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.634 EAL: Restoring previous memory policy: 4 00:04:14.634 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.634 EAL: request: mp_malloc_sync 00:04:14.634 EAL: No shared files mode enabled, IPC is disabled 00:04:14.634 EAL: Heap on socket 0 was expanded by 1026MB 00:04:14.913 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.177 passed 00:04:15.177 00:04:15.177 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.177 suites 1 1 n/a 0 0 00:04:15.177 tests 2 2 2 0 0 00:04:15.177 asserts 5568 5568 5568 0 n/a 00:04:15.177 00:04:15.177 Elapsed time = 1.238 seconds 00:04:15.177 EAL: request: mp_malloc_sync 00:04:15.177 EAL: No shared files mode enabled, IPC is disabled 00:04:15.177 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:15.177 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.177 EAL: request: mp_malloc_sync 00:04:15.177 EAL: No shared files mode enabled, IPC is disabled 00:04:15.177 EAL: Heap on socket 0 was shrunk by 2MB 00:04:15.177 EAL: No shared files mode enabled, IPC is disabled 00:04:15.177 EAL: No shared files mode enabled, IPC is disabled 00:04:15.177 EAL: No shared files mode enabled, IPC is disabled 00:04:15.177 00:04:15.177 real 0m1.450s 00:04:15.177 user 0m0.793s 00:04:15.177 sys 0m0.521s 00:04:15.177 ************************************ 00:04:15.177 END TEST env_vtophys 00:04:15.177 ************************************ 00:04:15.177 10:22:15 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:15.177 10:22:15 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:15.177 10:22:15 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:15.177 10:22:15 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:15.177 10:22:15 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:15.177 10:22:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.177 ************************************ 00:04:15.177 START TEST env_pci 00:04:15.177 ************************************ 00:04:15.177 10:22:15 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:15.177 00:04:15.177 00:04:15.177 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.177 http://cunit.sourceforge.net/ 00:04:15.177 00:04:15.177 00:04:15.177 Suite: pci 00:04:15.177 Test: pci_hook ...[2024-11-15 10:22:15.844978] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56582 has claimed it 00:04:15.177 passed 00:04:15.177 00:04:15.177 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.177 suites 1 1 n/a 0 0 00:04:15.177 tests 1 1 1 0 0 00:04:15.177 asserts 25 25 25 0 n/a 00:04:15.177 00:04:15.177 Elapsed time = 0.002 seconds 00:04:15.177 EAL: Cannot find device (10000:00:01.0) 00:04:15.177 EAL: Failed to attach device on primary process 00:04:15.177 00:04:15.177 real 0m0.019s 00:04:15.177 user 0m0.006s 00:04:15.177 sys 0m0.013s 00:04:15.177 10:22:15 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:15.177 ************************************ 00:04:15.177 END TEST env_pci 00:04:15.177 ************************************ 00:04:15.177 10:22:15 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:15.177 10:22:15 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:15.177 10:22:15 env -- env/env.sh@15 -- # uname 00:04:15.177 10:22:15 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:15.177 10:22:15 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:15.177 10:22:15 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:15.177 10:22:15 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:04:15.177 10:22:15 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:15.177 10:22:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.177 ************************************ 00:04:15.177 START TEST env_dpdk_post_init 00:04:15.177 ************************************ 00:04:15.177 10:22:15 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:15.177 EAL: Detected CPU lcores: 10 00:04:15.177 EAL: Detected NUMA nodes: 1 00:04:15.177 EAL: Detected shared linkage of DPDK 00:04:15.177 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:15.177 EAL: Selected IOVA mode 'PA' 00:04:15.436 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:15.436 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:15.436 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:15.436 Starting DPDK initialization... 00:04:15.436 Starting SPDK post initialization... 00:04:15.436 SPDK NVMe probe 00:04:15.436 Attaching to 0000:00:10.0 00:04:15.436 Attaching to 0000:00:11.0 00:04:15.437 Attached to 0000:00:10.0 00:04:15.437 Attached to 0000:00:11.0 00:04:15.437 Cleaning up... 00:04:15.437 00:04:15.437 real 0m0.189s 00:04:15.437 user 0m0.056s 00:04:15.437 sys 0m0.033s 00:04:15.437 10:22:16 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:15.437 10:22:16 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:15.437 ************************************ 00:04:15.437 END TEST env_dpdk_post_init 00:04:15.437 ************************************ 00:04:15.437 10:22:16 env -- env/env.sh@26 -- # uname 00:04:15.437 10:22:16 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:15.437 10:22:16 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:15.437 10:22:16 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:15.437 10:22:16 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:15.437 10:22:16 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.437 ************************************ 00:04:15.437 START TEST env_mem_callbacks 00:04:15.437 ************************************ 00:04:15.437 10:22:16 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:15.437 EAL: Detected CPU lcores: 10 00:04:15.437 EAL: Detected NUMA nodes: 1 00:04:15.437 EAL: Detected shared linkage of DPDK 00:04:15.437 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:15.437 EAL: Selected IOVA mode 'PA' 00:04:15.437 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:15.437 00:04:15.437 00:04:15.437 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.437 http://cunit.sourceforge.net/ 00:04:15.437 00:04:15.437 00:04:15.437 Suite: memory 00:04:15.437 Test: test ... 
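Worth calling out from the env.sh trace just above (env/env.sh@14 through @24): every env test binary is run with core mask -c 0x1, and on Linux --base-virtaddr=0x200000000000 is appended so DPDK maps its memory starting at a fixed virtual address (the 0x200000000000-range areas visible in the EAL output earlier in this run). A small sketch of that argument assembly, using the same values as the trace:

argv='-c 0x1 '
if [[ $(uname) = Linux ]]; then
    argv+=--base-virtaddr=0x200000000000
fi
# $argv is intentionally left unquoted so it splits into separate arguments
./env_dpdk_post_init $argv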
00:04:15.437 register 0x200000200000 2097152 00:04:15.437 malloc 3145728 00:04:15.437 register 0x200000400000 4194304 00:04:15.437 buf 0x200000500000 len 3145728 PASSED 00:04:15.437 malloc 64 00:04:15.437 buf 0x2000004fff40 len 64 PASSED 00:04:15.437 malloc 4194304 00:04:15.437 register 0x200000800000 6291456 00:04:15.437 buf 0x200000a00000 len 4194304 PASSED 00:04:15.437 free 0x200000500000 3145728 00:04:15.437 free 0x2000004fff40 64 00:04:15.437 unregister 0x200000400000 4194304 PASSED 00:04:15.437 free 0x200000a00000 4194304 00:04:15.437 unregister 0x200000800000 6291456 PASSED 00:04:15.437 malloc 8388608 00:04:15.437 register 0x200000400000 10485760 00:04:15.437 buf 0x200000600000 len 8388608 PASSED 00:04:15.437 free 0x200000600000 8388608 00:04:15.437 unregister 0x200000400000 10485760 PASSED 00:04:15.437 passed 00:04:15.437 00:04:15.437 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.437 suites 1 1 n/a 0 0 00:04:15.437 tests 1 1 1 0 0 00:04:15.437 asserts 15 15 15 0 n/a 00:04:15.437 00:04:15.437 Elapsed time = 0.007 seconds 00:04:15.437 00:04:15.437 real 0m0.142s 00:04:15.437 user 0m0.015s 00:04:15.437 sys 0m0.025s 00:04:15.696 10:22:16 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:15.696 10:22:16 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:15.696 ************************************ 00:04:15.696 END TEST env_mem_callbacks 00:04:15.696 ************************************ 00:04:15.696 00:04:15.696 real 0m2.444s 00:04:15.696 user 0m1.233s 00:04:15.696 sys 0m0.855s 00:04:15.696 10:22:16 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:15.696 10:22:16 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.696 ************************************ 00:04:15.696 END TEST env 00:04:15.696 ************************************ 00:04:15.696 10:22:16 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:15.696 10:22:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:15.696 10:22:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:15.696 10:22:16 -- common/autotest_common.sh@10 -- # set +x 00:04:15.696 ************************************ 00:04:15.696 START TEST rpc 00:04:15.696 ************************************ 00:04:15.696 10:22:16 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:15.696 * Looking for test storage... 
00:04:15.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:15.696 10:22:16 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:15.696 10:22:16 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:15.696 10:22:16 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:15.696 10:22:16 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:15.696 10:22:16 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.696 10:22:16 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.696 10:22:16 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.696 10:22:16 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.696 10:22:16 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.696 10:22:16 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.696 10:22:16 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.696 10:22:16 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.696 10:22:16 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.696 10:22:16 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.696 10:22:16 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.696 10:22:16 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:15.696 10:22:16 rpc -- scripts/common.sh@345 -- # : 1 00:04:15.696 10:22:16 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.696 10:22:16 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:15.696 10:22:16 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:15.696 10:22:16 rpc -- scripts/common.sh@353 -- # local d=1 00:04:15.696 10:22:16 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.696 10:22:16 rpc -- scripts/common.sh@355 -- # echo 1 00:04:15.697 10:22:16 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.697 10:22:16 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:15.697 10:22:16 rpc -- scripts/common.sh@353 -- # local d=2 00:04:15.697 10:22:16 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.697 10:22:16 rpc -- scripts/common.sh@355 -- # echo 2 00:04:15.697 10:22:16 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.697 10:22:16 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.697 10:22:16 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.697 10:22:16 rpc -- scripts/common.sh@368 -- # return 0 00:04:15.697 10:22:16 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.697 10:22:16 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:15.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.697 --rc genhtml_branch_coverage=1 00:04:15.697 --rc genhtml_function_coverage=1 00:04:15.697 --rc genhtml_legend=1 00:04:15.697 --rc geninfo_all_blocks=1 00:04:15.697 --rc geninfo_unexecuted_blocks=1 00:04:15.697 00:04:15.697 ' 00:04:15.697 10:22:16 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:15.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.697 --rc genhtml_branch_coverage=1 00:04:15.697 --rc genhtml_function_coverage=1 00:04:15.697 --rc genhtml_legend=1 00:04:15.697 --rc geninfo_all_blocks=1 00:04:15.697 --rc geninfo_unexecuted_blocks=1 00:04:15.697 00:04:15.697 ' 00:04:15.697 10:22:16 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:15.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.697 --rc genhtml_branch_coverage=1 00:04:15.697 --rc genhtml_function_coverage=1 00:04:15.697 --rc 
genhtml_legend=1 00:04:15.697 --rc geninfo_all_blocks=1 00:04:15.697 --rc geninfo_unexecuted_blocks=1 00:04:15.697 00:04:15.697 ' 00:04:15.697 10:22:16 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:15.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.697 --rc genhtml_branch_coverage=1 00:04:15.697 --rc genhtml_function_coverage=1 00:04:15.697 --rc genhtml_legend=1 00:04:15.697 --rc geninfo_all_blocks=1 00:04:15.697 --rc geninfo_unexecuted_blocks=1 00:04:15.697 00:04:15.697 ' 00:04:15.697 10:22:16 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56705 00:04:15.697 10:22:16 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:15.697 10:22:16 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.697 10:22:16 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56705 00:04:15.697 10:22:16 rpc -- common/autotest_common.sh@833 -- # '[' -z 56705 ']' 00:04:15.697 10:22:16 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.697 10:22:16 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:15.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.697 10:22:16 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.697 10:22:16 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:15.697 10:22:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.957 [2024-11-15 10:22:16.602319] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:04:15.957 [2024-11-15 10:22:16.602418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56705 ] 00:04:15.957 [2024-11-15 10:22:16.744156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.957 [2024-11-15 10:22:16.802646] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:15.957 [2024-11-15 10:22:16.802709] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56705' to capture a snapshot of events at runtime. 00:04:15.957 [2024-11-15 10:22:16.802721] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:15.957 [2024-11-15 10:22:16.802731] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:15.957 [2024-11-15 10:22:16.802738] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56705 for offline analysis/debug. 
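The target above was launched with '-e bdev', so only the bdev tracepoint group is enabled, and the notices spell out the two ways to look at the resulting trace while pid 56705 is alive. A minimal sketch of that flow, assuming an in-tree build with the spdk_trace app on PATH (the pid changes on every run, and the copy destination below is only illustrative):

  # take a snapshot of runtime events from the shared-memory trace file, as the notice suggests
  spdk_trace -s spdk_tgt -p 56705

  # or keep the raw shm file for offline analysis
  cp /dev/shm/spdk_tgt_trace.pid56705 /tmp/spdk_tgt_trace.pid56705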
00:04:15.957 [2024-11-15 10:22:16.803179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.216 [2024-11-15 10:22:16.872990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:16.476 10:22:17 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:16.476 10:22:17 rpc -- common/autotest_common.sh@866 -- # return 0 00:04:16.476 10:22:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:16.476 10:22:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:16.476 10:22:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:16.476 10:22:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:16.476 10:22:17 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:16.476 10:22:17 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:16.476 10:22:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.476 ************************************ 00:04:16.476 START TEST rpc_integrity 00:04:16.476 ************************************ 00:04:16.476 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:16.476 10:22:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:16.476 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.476 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.476 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.476 10:22:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:16.476 10:22:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:16.476 10:22:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:16.476 10:22:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:16.476 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.476 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.476 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.476 10:22:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:16.476 10:22:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:16.476 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.476 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.476 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.476 10:22:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:16.476 { 00:04:16.476 "name": "Malloc0", 00:04:16.476 "aliases": [ 00:04:16.476 "db944fc6-7226-4df5-a248-e52edcfd1acc" 00:04:16.476 ], 00:04:16.476 "product_name": "Malloc disk", 00:04:16.476 "block_size": 512, 00:04:16.476 "num_blocks": 16384, 00:04:16.476 "uuid": "db944fc6-7226-4df5-a248-e52edcfd1acc", 00:04:16.476 "assigned_rate_limits": { 00:04:16.476 "rw_ios_per_sec": 0, 00:04:16.476 "rw_mbytes_per_sec": 0, 00:04:16.476 "r_mbytes_per_sec": 0, 00:04:16.476 "w_mbytes_per_sec": 0 00:04:16.476 }, 00:04:16.476 "claimed": false, 00:04:16.476 "zoned": false, 00:04:16.476 
"supported_io_types": { 00:04:16.476 "read": true, 00:04:16.476 "write": true, 00:04:16.476 "unmap": true, 00:04:16.476 "flush": true, 00:04:16.476 "reset": true, 00:04:16.476 "nvme_admin": false, 00:04:16.476 "nvme_io": false, 00:04:16.476 "nvme_io_md": false, 00:04:16.476 "write_zeroes": true, 00:04:16.476 "zcopy": true, 00:04:16.476 "get_zone_info": false, 00:04:16.476 "zone_management": false, 00:04:16.476 "zone_append": false, 00:04:16.476 "compare": false, 00:04:16.476 "compare_and_write": false, 00:04:16.476 "abort": true, 00:04:16.476 "seek_hole": false, 00:04:16.476 "seek_data": false, 00:04:16.476 "copy": true, 00:04:16.476 "nvme_iov_md": false 00:04:16.476 }, 00:04:16.476 "memory_domains": [ 00:04:16.476 { 00:04:16.476 "dma_device_id": "system", 00:04:16.476 "dma_device_type": 1 00:04:16.476 }, 00:04:16.476 { 00:04:16.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.476 "dma_device_type": 2 00:04:16.476 } 00:04:16.476 ], 00:04:16.476 "driver_specific": {} 00:04:16.476 } 00:04:16.476 ]' 00:04:16.476 10:22:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:16.476 10:22:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:16.476 10:22:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:16.476 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.476 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.476 [2024-11-15 10:22:17.236898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:16.476 [2024-11-15 10:22:17.236959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:16.476 [2024-11-15 10:22:17.236996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1da1050 00:04:16.476 [2024-11-15 10:22:17.237080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:16.476 [2024-11-15 10:22:17.238853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:16.476 [2024-11-15 10:22:17.238891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:16.476 Passthru0 00:04:16.476 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.476 10:22:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:16.476 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.476 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.476 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.476 10:22:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:16.476 { 00:04:16.476 "name": "Malloc0", 00:04:16.476 "aliases": [ 00:04:16.476 "db944fc6-7226-4df5-a248-e52edcfd1acc" 00:04:16.476 ], 00:04:16.476 "product_name": "Malloc disk", 00:04:16.476 "block_size": 512, 00:04:16.476 "num_blocks": 16384, 00:04:16.476 "uuid": "db944fc6-7226-4df5-a248-e52edcfd1acc", 00:04:16.476 "assigned_rate_limits": { 00:04:16.476 "rw_ios_per_sec": 0, 00:04:16.476 "rw_mbytes_per_sec": 0, 00:04:16.476 "r_mbytes_per_sec": 0, 00:04:16.476 "w_mbytes_per_sec": 0 00:04:16.476 }, 00:04:16.476 "claimed": true, 00:04:16.476 "claim_type": "exclusive_write", 00:04:16.476 "zoned": false, 00:04:16.476 "supported_io_types": { 00:04:16.476 "read": true, 00:04:16.476 "write": true, 00:04:16.476 "unmap": true, 00:04:16.476 "flush": true, 00:04:16.476 "reset": true, 00:04:16.476 "nvme_admin": false, 
00:04:16.476 "nvme_io": false, 00:04:16.476 "nvme_io_md": false, 00:04:16.476 "write_zeroes": true, 00:04:16.476 "zcopy": true, 00:04:16.476 "get_zone_info": false, 00:04:16.476 "zone_management": false, 00:04:16.476 "zone_append": false, 00:04:16.476 "compare": false, 00:04:16.476 "compare_and_write": false, 00:04:16.476 "abort": true, 00:04:16.476 "seek_hole": false, 00:04:16.476 "seek_data": false, 00:04:16.476 "copy": true, 00:04:16.476 "nvme_iov_md": false 00:04:16.476 }, 00:04:16.476 "memory_domains": [ 00:04:16.476 { 00:04:16.476 "dma_device_id": "system", 00:04:16.476 "dma_device_type": 1 00:04:16.476 }, 00:04:16.476 { 00:04:16.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.476 "dma_device_type": 2 00:04:16.476 } 00:04:16.476 ], 00:04:16.476 "driver_specific": {} 00:04:16.476 }, 00:04:16.476 { 00:04:16.476 "name": "Passthru0", 00:04:16.476 "aliases": [ 00:04:16.476 "f0682c5e-65e2-5db1-9c56-7bcebb650cbe" 00:04:16.476 ], 00:04:16.476 "product_name": "passthru", 00:04:16.476 "block_size": 512, 00:04:16.476 "num_blocks": 16384, 00:04:16.476 "uuid": "f0682c5e-65e2-5db1-9c56-7bcebb650cbe", 00:04:16.476 "assigned_rate_limits": { 00:04:16.476 "rw_ios_per_sec": 0, 00:04:16.476 "rw_mbytes_per_sec": 0, 00:04:16.476 "r_mbytes_per_sec": 0, 00:04:16.476 "w_mbytes_per_sec": 0 00:04:16.476 }, 00:04:16.476 "claimed": false, 00:04:16.476 "zoned": false, 00:04:16.476 "supported_io_types": { 00:04:16.476 "read": true, 00:04:16.476 "write": true, 00:04:16.476 "unmap": true, 00:04:16.476 "flush": true, 00:04:16.477 "reset": true, 00:04:16.477 "nvme_admin": false, 00:04:16.477 "nvme_io": false, 00:04:16.477 "nvme_io_md": false, 00:04:16.477 "write_zeroes": true, 00:04:16.477 "zcopy": true, 00:04:16.477 "get_zone_info": false, 00:04:16.477 "zone_management": false, 00:04:16.477 "zone_append": false, 00:04:16.477 "compare": false, 00:04:16.477 "compare_and_write": false, 00:04:16.477 "abort": true, 00:04:16.477 "seek_hole": false, 00:04:16.477 "seek_data": false, 00:04:16.477 "copy": true, 00:04:16.477 "nvme_iov_md": false 00:04:16.477 }, 00:04:16.477 "memory_domains": [ 00:04:16.477 { 00:04:16.477 "dma_device_id": "system", 00:04:16.477 "dma_device_type": 1 00:04:16.477 }, 00:04:16.477 { 00:04:16.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.477 "dma_device_type": 2 00:04:16.477 } 00:04:16.477 ], 00:04:16.477 "driver_specific": { 00:04:16.477 "passthru": { 00:04:16.477 "name": "Passthru0", 00:04:16.477 "base_bdev_name": "Malloc0" 00:04:16.477 } 00:04:16.477 } 00:04:16.477 } 00:04:16.477 ]' 00:04:16.477 10:22:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:16.735 10:22:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:16.735 10:22:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:16.735 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.735 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.735 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.735 10:22:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:16.735 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.735 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.735 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.735 10:22:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:16.735 10:22:17 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.735 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.735 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.735 10:22:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:16.735 10:22:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:16.735 10:22:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:16.735 00:04:16.735 real 0m0.333s 00:04:16.735 user 0m0.215s 00:04:16.735 sys 0m0.039s 00:04:16.735 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:16.735 10:22:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.735 ************************************ 00:04:16.735 END TEST rpc_integrity 00:04:16.735 ************************************ 00:04:16.735 10:22:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:16.735 10:22:17 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:16.735 10:22:17 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:16.735 10:22:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.735 ************************************ 00:04:16.736 START TEST rpc_plugins 00:04:16.736 ************************************ 00:04:16.736 10:22:17 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:04:16.736 10:22:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:16.736 10:22:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.736 10:22:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.736 10:22:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.736 10:22:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:16.736 10:22:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:16.736 10:22:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.736 10:22:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.736 10:22:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.736 10:22:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:16.736 { 00:04:16.736 "name": "Malloc1", 00:04:16.736 "aliases": [ 00:04:16.736 "d916c575-abe8-46dd-98e0-59bbfc73d3fc" 00:04:16.736 ], 00:04:16.736 "product_name": "Malloc disk", 00:04:16.736 "block_size": 4096, 00:04:16.736 "num_blocks": 256, 00:04:16.736 "uuid": "d916c575-abe8-46dd-98e0-59bbfc73d3fc", 00:04:16.736 "assigned_rate_limits": { 00:04:16.736 "rw_ios_per_sec": 0, 00:04:16.736 "rw_mbytes_per_sec": 0, 00:04:16.736 "r_mbytes_per_sec": 0, 00:04:16.736 "w_mbytes_per_sec": 0 00:04:16.736 }, 00:04:16.736 "claimed": false, 00:04:16.736 "zoned": false, 00:04:16.736 "supported_io_types": { 00:04:16.736 "read": true, 00:04:16.736 "write": true, 00:04:16.736 "unmap": true, 00:04:16.736 "flush": true, 00:04:16.736 "reset": true, 00:04:16.736 "nvme_admin": false, 00:04:16.736 "nvme_io": false, 00:04:16.736 "nvme_io_md": false, 00:04:16.736 "write_zeroes": true, 00:04:16.736 "zcopy": true, 00:04:16.736 "get_zone_info": false, 00:04:16.736 "zone_management": false, 00:04:16.736 "zone_append": false, 00:04:16.736 "compare": false, 00:04:16.736 "compare_and_write": false, 00:04:16.736 "abort": true, 00:04:16.736 "seek_hole": false, 00:04:16.736 "seek_data": false, 00:04:16.736 "copy": true, 00:04:16.736 "nvme_iov_md": false 00:04:16.736 }, 00:04:16.736 "memory_domains": [ 00:04:16.736 { 
00:04:16.736 "dma_device_id": "system", 00:04:16.736 "dma_device_type": 1 00:04:16.736 }, 00:04:16.736 { 00:04:16.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.736 "dma_device_type": 2 00:04:16.736 } 00:04:16.736 ], 00:04:16.736 "driver_specific": {} 00:04:16.736 } 00:04:16.736 ]' 00:04:16.736 10:22:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:16.736 10:22:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:16.736 10:22:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:16.736 10:22:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.736 10:22:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.736 10:22:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.736 10:22:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:16.736 10:22:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.736 10:22:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.736 10:22:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.736 10:22:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:16.736 10:22:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:16.994 10:22:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:16.994 00:04:16.994 real 0m0.170s 00:04:16.994 user 0m0.106s 00:04:16.994 sys 0m0.026s 00:04:16.994 ************************************ 00:04:16.994 END TEST rpc_plugins 00:04:16.994 10:22:17 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:16.994 10:22:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.994 ************************************ 00:04:16.994 10:22:17 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:16.994 10:22:17 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:16.994 10:22:17 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:16.994 10:22:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.994 ************************************ 00:04:16.994 START TEST rpc_trace_cmd_test 00:04:16.994 ************************************ 00:04:16.994 10:22:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:04:16.994 10:22:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:16.994 10:22:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:16.994 10:22:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.994 10:22:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:16.994 10:22:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.994 10:22:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:16.994 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56705", 00:04:16.994 "tpoint_group_mask": "0x8", 00:04:16.994 "iscsi_conn": { 00:04:16.994 "mask": "0x2", 00:04:16.994 "tpoint_mask": "0x0" 00:04:16.994 }, 00:04:16.994 "scsi": { 00:04:16.994 "mask": "0x4", 00:04:16.994 "tpoint_mask": "0x0" 00:04:16.994 }, 00:04:16.994 "bdev": { 00:04:16.994 "mask": "0x8", 00:04:16.994 "tpoint_mask": "0xffffffffffffffff" 00:04:16.994 }, 00:04:16.994 "nvmf_rdma": { 00:04:16.994 "mask": "0x10", 00:04:16.994 "tpoint_mask": "0x0" 00:04:16.994 }, 00:04:16.994 "nvmf_tcp": { 00:04:16.994 "mask": "0x20", 00:04:16.994 "tpoint_mask": "0x0" 00:04:16.994 }, 00:04:16.994 "ftl": { 00:04:16.994 
"mask": "0x40", 00:04:16.994 "tpoint_mask": "0x0" 00:04:16.994 }, 00:04:16.994 "blobfs": { 00:04:16.994 "mask": "0x80", 00:04:16.994 "tpoint_mask": "0x0" 00:04:16.994 }, 00:04:16.994 "dsa": { 00:04:16.994 "mask": "0x200", 00:04:16.994 "tpoint_mask": "0x0" 00:04:16.995 }, 00:04:16.995 "thread": { 00:04:16.995 "mask": "0x400", 00:04:16.995 "tpoint_mask": "0x0" 00:04:16.995 }, 00:04:16.995 "nvme_pcie": { 00:04:16.995 "mask": "0x800", 00:04:16.995 "tpoint_mask": "0x0" 00:04:16.995 }, 00:04:16.995 "iaa": { 00:04:16.995 "mask": "0x1000", 00:04:16.995 "tpoint_mask": "0x0" 00:04:16.995 }, 00:04:16.995 "nvme_tcp": { 00:04:16.995 "mask": "0x2000", 00:04:16.995 "tpoint_mask": "0x0" 00:04:16.995 }, 00:04:16.995 "bdev_nvme": { 00:04:16.995 "mask": "0x4000", 00:04:16.995 "tpoint_mask": "0x0" 00:04:16.995 }, 00:04:16.995 "sock": { 00:04:16.995 "mask": "0x8000", 00:04:16.995 "tpoint_mask": "0x0" 00:04:16.995 }, 00:04:16.995 "blob": { 00:04:16.995 "mask": "0x10000", 00:04:16.995 "tpoint_mask": "0x0" 00:04:16.995 }, 00:04:16.995 "bdev_raid": { 00:04:16.995 "mask": "0x20000", 00:04:16.995 "tpoint_mask": "0x0" 00:04:16.995 }, 00:04:16.995 "scheduler": { 00:04:16.995 "mask": "0x40000", 00:04:16.995 "tpoint_mask": "0x0" 00:04:16.995 } 00:04:16.995 }' 00:04:16.995 10:22:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:16.995 10:22:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:16.995 10:22:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:16.995 10:22:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:16.995 10:22:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:17.254 10:22:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:17.254 10:22:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:17.254 10:22:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:17.254 10:22:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:17.254 10:22:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:17.254 00:04:17.254 real 0m0.282s 00:04:17.254 user 0m0.246s 00:04:17.254 sys 0m0.027s 00:04:17.254 10:22:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:17.254 10:22:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:17.254 ************************************ 00:04:17.254 END TEST rpc_trace_cmd_test 00:04:17.254 ************************************ 00:04:17.254 10:22:17 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:17.254 10:22:18 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:17.254 10:22:18 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:17.254 10:22:18 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:17.254 10:22:18 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:17.254 10:22:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.254 ************************************ 00:04:17.254 START TEST rpc_daemon_integrity 00:04:17.254 ************************************ 00:04:17.254 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:17.254 10:22:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:17.254 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.254 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.254 
10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.254 10:22:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:17.254 10:22:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:17.254 10:22:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:17.254 10:22:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:17.254 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.254 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.254 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.254 10:22:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:17.254 10:22:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:17.254 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.254 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.254 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.254 10:22:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:17.254 { 00:04:17.254 "name": "Malloc2", 00:04:17.254 "aliases": [ 00:04:17.254 "ec943f25-b142-479b-8372-9d5beb56119b" 00:04:17.254 ], 00:04:17.254 "product_name": "Malloc disk", 00:04:17.254 "block_size": 512, 00:04:17.254 "num_blocks": 16384, 00:04:17.254 "uuid": "ec943f25-b142-479b-8372-9d5beb56119b", 00:04:17.255 "assigned_rate_limits": { 00:04:17.255 "rw_ios_per_sec": 0, 00:04:17.255 "rw_mbytes_per_sec": 0, 00:04:17.255 "r_mbytes_per_sec": 0, 00:04:17.255 "w_mbytes_per_sec": 0 00:04:17.255 }, 00:04:17.255 "claimed": false, 00:04:17.255 "zoned": false, 00:04:17.255 "supported_io_types": { 00:04:17.255 "read": true, 00:04:17.255 "write": true, 00:04:17.255 "unmap": true, 00:04:17.255 "flush": true, 00:04:17.255 "reset": true, 00:04:17.255 "nvme_admin": false, 00:04:17.255 "nvme_io": false, 00:04:17.255 "nvme_io_md": false, 00:04:17.255 "write_zeroes": true, 00:04:17.255 "zcopy": true, 00:04:17.255 "get_zone_info": false, 00:04:17.255 "zone_management": false, 00:04:17.255 "zone_append": false, 00:04:17.255 "compare": false, 00:04:17.255 "compare_and_write": false, 00:04:17.255 "abort": true, 00:04:17.255 "seek_hole": false, 00:04:17.255 "seek_data": false, 00:04:17.255 "copy": true, 00:04:17.255 "nvme_iov_md": false 00:04:17.255 }, 00:04:17.255 "memory_domains": [ 00:04:17.255 { 00:04:17.255 "dma_device_id": "system", 00:04:17.255 "dma_device_type": 1 00:04:17.255 }, 00:04:17.255 { 00:04:17.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.255 "dma_device_type": 2 00:04:17.255 } 00:04:17.255 ], 00:04:17.255 "driver_specific": {} 00:04:17.255 } 00:04:17.255 ]' 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.514 [2024-11-15 10:22:18.161645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:17.514 [2024-11-15 10:22:18.161730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:17.514 [2024-11-15 10:22:18.161749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ee41e0 00:04:17.514 [2024-11-15 10:22:18.161760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:17.514 [2024-11-15 10:22:18.163326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:17.514 [2024-11-15 10:22:18.163363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:17.514 Passthru0 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:17.514 { 00:04:17.514 "name": "Malloc2", 00:04:17.514 "aliases": [ 00:04:17.514 "ec943f25-b142-479b-8372-9d5beb56119b" 00:04:17.514 ], 00:04:17.514 "product_name": "Malloc disk", 00:04:17.514 "block_size": 512, 00:04:17.514 "num_blocks": 16384, 00:04:17.514 "uuid": "ec943f25-b142-479b-8372-9d5beb56119b", 00:04:17.514 "assigned_rate_limits": { 00:04:17.514 "rw_ios_per_sec": 0, 00:04:17.514 "rw_mbytes_per_sec": 0, 00:04:17.514 "r_mbytes_per_sec": 0, 00:04:17.514 "w_mbytes_per_sec": 0 00:04:17.514 }, 00:04:17.514 "claimed": true, 00:04:17.514 "claim_type": "exclusive_write", 00:04:17.514 "zoned": false, 00:04:17.514 "supported_io_types": { 00:04:17.514 "read": true, 00:04:17.514 "write": true, 00:04:17.514 "unmap": true, 00:04:17.514 "flush": true, 00:04:17.514 "reset": true, 00:04:17.514 "nvme_admin": false, 00:04:17.514 "nvme_io": false, 00:04:17.514 "nvme_io_md": false, 00:04:17.514 "write_zeroes": true, 00:04:17.514 "zcopy": true, 00:04:17.514 "get_zone_info": false, 00:04:17.514 "zone_management": false, 00:04:17.514 "zone_append": false, 00:04:17.514 "compare": false, 00:04:17.514 "compare_and_write": false, 00:04:17.514 "abort": true, 00:04:17.514 "seek_hole": false, 00:04:17.514 "seek_data": false, 00:04:17.514 "copy": true, 00:04:17.514 "nvme_iov_md": false 00:04:17.514 }, 00:04:17.514 "memory_domains": [ 00:04:17.514 { 00:04:17.514 "dma_device_id": "system", 00:04:17.514 "dma_device_type": 1 00:04:17.514 }, 00:04:17.514 { 00:04:17.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.514 "dma_device_type": 2 00:04:17.514 } 00:04:17.514 ], 00:04:17.514 "driver_specific": {} 00:04:17.514 }, 00:04:17.514 { 00:04:17.514 "name": "Passthru0", 00:04:17.514 "aliases": [ 00:04:17.514 "dbabb17d-2109-59f5-968f-098b9de86a7c" 00:04:17.514 ], 00:04:17.514 "product_name": "passthru", 00:04:17.514 "block_size": 512, 00:04:17.514 "num_blocks": 16384, 00:04:17.514 "uuid": "dbabb17d-2109-59f5-968f-098b9de86a7c", 00:04:17.514 "assigned_rate_limits": { 00:04:17.514 "rw_ios_per_sec": 0, 00:04:17.514 "rw_mbytes_per_sec": 0, 00:04:17.514 "r_mbytes_per_sec": 0, 00:04:17.514 "w_mbytes_per_sec": 0 00:04:17.514 }, 00:04:17.514 "claimed": false, 00:04:17.514 "zoned": false, 00:04:17.514 "supported_io_types": { 00:04:17.514 "read": true, 00:04:17.514 "write": true, 00:04:17.514 "unmap": true, 00:04:17.514 "flush": true, 00:04:17.514 "reset": true, 00:04:17.514 "nvme_admin": false, 00:04:17.514 "nvme_io": false, 00:04:17.514 
"nvme_io_md": false, 00:04:17.514 "write_zeroes": true, 00:04:17.514 "zcopy": true, 00:04:17.514 "get_zone_info": false, 00:04:17.514 "zone_management": false, 00:04:17.514 "zone_append": false, 00:04:17.514 "compare": false, 00:04:17.514 "compare_and_write": false, 00:04:17.514 "abort": true, 00:04:17.514 "seek_hole": false, 00:04:17.514 "seek_data": false, 00:04:17.514 "copy": true, 00:04:17.514 "nvme_iov_md": false 00:04:17.514 }, 00:04:17.514 "memory_domains": [ 00:04:17.514 { 00:04:17.514 "dma_device_id": "system", 00:04:17.514 "dma_device_type": 1 00:04:17.514 }, 00:04:17.514 { 00:04:17.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.514 "dma_device_type": 2 00:04:17.514 } 00:04:17.514 ], 00:04:17.514 "driver_specific": { 00:04:17.514 "passthru": { 00:04:17.514 "name": "Passthru0", 00:04:17.514 "base_bdev_name": "Malloc2" 00:04:17.514 } 00:04:17.514 } 00:04:17.514 } 00:04:17.514 ]' 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:17.514 00:04:17.514 real 0m0.313s 00:04:17.514 user 0m0.218s 00:04:17.514 sys 0m0.035s 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:17.514 10:22:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.514 ************************************ 00:04:17.514 END TEST rpc_daemon_integrity 00:04:17.514 ************************************ 00:04:17.514 10:22:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:17.514 10:22:18 rpc -- rpc/rpc.sh@84 -- # killprocess 56705 00:04:17.514 10:22:18 rpc -- common/autotest_common.sh@952 -- # '[' -z 56705 ']' 00:04:17.514 10:22:18 rpc -- common/autotest_common.sh@956 -- # kill -0 56705 00:04:17.514 10:22:18 rpc -- common/autotest_common.sh@957 -- # uname 00:04:17.514 10:22:18 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:17.514 10:22:18 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56705 00:04:17.774 10:22:18 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:04:17.774 killing process with pid 56705 00:04:17.774 10:22:18 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:17.774 10:22:18 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56705' 00:04:17.774 10:22:18 rpc -- common/autotest_common.sh@971 -- # kill 56705 00:04:17.774 10:22:18 rpc -- common/autotest_common.sh@976 -- # wait 56705 00:04:18.033 00:04:18.033 real 0m2.396s 00:04:18.033 user 0m3.078s 00:04:18.033 sys 0m0.637s 00:04:18.033 10:22:18 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:18.033 10:22:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.033 ************************************ 00:04:18.033 END TEST rpc 00:04:18.033 ************************************ 00:04:18.033 10:22:18 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:18.033 10:22:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:18.033 10:22:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:18.033 10:22:18 -- common/autotest_common.sh@10 -- # set +x 00:04:18.033 ************************************ 00:04:18.033 START TEST skip_rpc 00:04:18.033 ************************************ 00:04:18.033 10:22:18 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:18.033 * Looking for test storage... 00:04:18.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:18.292 10:22:18 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:18.292 10:22:18 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:18.292 10:22:18 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:18.292 10:22:18 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.292 10:22:18 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:18.292 10:22:18 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.292 10:22:18 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:18.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.292 --rc genhtml_branch_coverage=1 00:04:18.292 --rc genhtml_function_coverage=1 00:04:18.292 --rc genhtml_legend=1 00:04:18.292 --rc geninfo_all_blocks=1 00:04:18.292 --rc geninfo_unexecuted_blocks=1 00:04:18.292 00:04:18.292 ' 00:04:18.292 10:22:18 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:18.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.292 --rc genhtml_branch_coverage=1 00:04:18.292 --rc genhtml_function_coverage=1 00:04:18.292 --rc genhtml_legend=1 00:04:18.292 --rc geninfo_all_blocks=1 00:04:18.292 --rc geninfo_unexecuted_blocks=1 00:04:18.292 00:04:18.292 ' 00:04:18.292 10:22:18 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:18.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.292 --rc genhtml_branch_coverage=1 00:04:18.292 --rc genhtml_function_coverage=1 00:04:18.292 --rc genhtml_legend=1 00:04:18.292 --rc geninfo_all_blocks=1 00:04:18.292 --rc geninfo_unexecuted_blocks=1 00:04:18.292 00:04:18.292 ' 00:04:18.292 10:22:18 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:18.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.292 --rc genhtml_branch_coverage=1 00:04:18.292 --rc genhtml_function_coverage=1 00:04:18.292 --rc genhtml_legend=1 00:04:18.292 --rc geninfo_all_blocks=1 00:04:18.292 --rc geninfo_unexecuted_blocks=1 00:04:18.292 00:04:18.292 ' 00:04:18.292 10:22:18 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:18.292 10:22:18 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:18.292 10:22:18 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:18.292 10:22:18 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:18.292 10:22:18 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:18.292 10:22:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.292 ************************************ 00:04:18.292 START TEST skip_rpc 00:04:18.292 ************************************ 00:04:18.292 10:22:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:04:18.292 10:22:18 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56904 00:04:18.292 10:22:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:18.292 10:22:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.292 10:22:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:18.292 [2024-11-15 10:22:19.048424] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:04:18.292 [2024-11-15 10:22:19.048516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56904 ] 00:04:18.550 [2024-11-15 10:22:19.188793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.550 [2024-11-15 10:22:19.249556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.550 [2024-11-15 10:22:19.321364] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:23.818 10:22:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:23.818 10:22:23 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:23.818 10:22:23 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:23.818 10:22:23 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:23.818 10:22:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:23.818 10:22:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:23.818 10:22:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:23.818 10:22:23 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:23.818 10:22:23 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.818 10:22:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.818 10:22:23 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:23.818 10:22:24 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:23.818 10:22:24 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:23.818 10:22:24 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:23.818 10:22:24 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:23.818 10:22:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:23.818 10:22:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56904 00:04:23.818 10:22:24 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 56904 ']' 00:04:23.818 10:22:24 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 56904 00:04:23.818 10:22:24 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:23.818 10:22:24 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:23.818 10:22:24 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56904 00:04:23.818 10:22:24 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:23.818 10:22:24 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:23.818 killing process with pid 56904 00:04:23.818 10:22:24 skip_rpc.skip_rpc -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 56904' 00:04:23.818 10:22:24 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 56904 00:04:23.818 10:22:24 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 56904 00:04:23.818 00:04:23.818 real 0m5.426s 00:04:23.818 user 0m5.041s 00:04:23.818 sys 0m0.296s 00:04:23.818 10:22:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:23.818 10:22:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.818 ************************************ 00:04:23.818 END TEST skip_rpc 00:04:23.818 ************************************ 00:04:23.818 10:22:24 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:23.818 10:22:24 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:23.818 10:22:24 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:23.818 10:22:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.818 ************************************ 00:04:23.818 START TEST skip_rpc_with_json 00:04:23.818 ************************************ 00:04:23.818 10:22:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:23.818 10:22:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:23.818 10:22:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56985 00:04:23.818 10:22:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.818 10:22:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:23.818 10:22:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56985 00:04:23.818 10:22:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 56985 ']' 00:04:23.818 10:22:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.818 10:22:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:23.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.818 10:22:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.818 10:22:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:23.818 10:22:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:23.818 [2024-11-15 10:22:24.531517] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:04:23.818 [2024-11-15 10:22:24.531634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56985 ] 00:04:24.076 [2024-11-15 10:22:24.680160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.076 [2024-11-15 10:22:24.742144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.076 [2024-11-15 10:22:24.816942] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:25.014 10:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:25.014 10:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:04:25.014 10:22:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:25.014 10:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.014 10:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.014 [2024-11-15 10:22:25.552771] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:25.014 request: 00:04:25.014 { 00:04:25.014 "trtype": "tcp", 00:04:25.014 "method": "nvmf_get_transports", 00:04:25.014 "req_id": 1 00:04:25.014 } 00:04:25.014 Got JSON-RPC error response 00:04:25.014 response: 00:04:25.014 { 00:04:25.014 "code": -19, 00:04:25.014 "message": "No such device" 00:04:25.014 } 00:04:25.014 10:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:25.014 10:22:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:25.014 10:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.014 10:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.014 [2024-11-15 10:22:25.564905] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:25.014 10:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.014 10:22:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:25.014 10:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.014 10:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.014 10:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.014 10:22:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:25.014 { 00:04:25.014 "subsystems": [ 00:04:25.014 { 00:04:25.015 "subsystem": "fsdev", 00:04:25.015 "config": [ 00:04:25.015 { 00:04:25.015 "method": "fsdev_set_opts", 00:04:25.015 "params": { 00:04:25.015 "fsdev_io_pool_size": 65535, 00:04:25.015 "fsdev_io_cache_size": 256 00:04:25.015 } 00:04:25.015 } 00:04:25.015 ] 00:04:25.015 }, 00:04:25.015 { 00:04:25.015 "subsystem": "keyring", 00:04:25.015 "config": [] 00:04:25.015 }, 00:04:25.015 { 00:04:25.015 "subsystem": "iobuf", 00:04:25.015 "config": [ 00:04:25.015 { 00:04:25.015 "method": "iobuf_set_options", 00:04:25.015 "params": { 00:04:25.015 "small_pool_count": 8192, 00:04:25.015 "large_pool_count": 1024, 00:04:25.015 "small_bufsize": 8192, 00:04:25.015 "large_bufsize": 135168, 00:04:25.015 "enable_numa": false 00:04:25.015 } 
00:04:25.015 } 00:04:25.015 ] 00:04:25.015 }, 00:04:25.015 { 00:04:25.015 "subsystem": "sock", 00:04:25.015 "config": [ 00:04:25.015 { 00:04:25.015 "method": "sock_set_default_impl", 00:04:25.015 "params": { 00:04:25.015 "impl_name": "uring" 00:04:25.015 } 00:04:25.015 }, 00:04:25.015 { 00:04:25.015 "method": "sock_impl_set_options", 00:04:25.015 "params": { 00:04:25.015 "impl_name": "ssl", 00:04:25.015 "recv_buf_size": 4096, 00:04:25.015 "send_buf_size": 4096, 00:04:25.015 "enable_recv_pipe": true, 00:04:25.015 "enable_quickack": false, 00:04:25.015 "enable_placement_id": 0, 00:04:25.015 "enable_zerocopy_send_server": true, 00:04:25.015 "enable_zerocopy_send_client": false, 00:04:25.015 "zerocopy_threshold": 0, 00:04:25.015 "tls_version": 0, 00:04:25.015 "enable_ktls": false 00:04:25.015 } 00:04:25.015 }, 00:04:25.015 { 00:04:25.015 "method": "sock_impl_set_options", 00:04:25.015 "params": { 00:04:25.015 "impl_name": "posix", 00:04:25.015 "recv_buf_size": 2097152, 00:04:25.015 "send_buf_size": 2097152, 00:04:25.015 "enable_recv_pipe": true, 00:04:25.015 "enable_quickack": false, 00:04:25.015 "enable_placement_id": 0, 00:04:25.015 "enable_zerocopy_send_server": true, 00:04:25.015 "enable_zerocopy_send_client": false, 00:04:25.015 "zerocopy_threshold": 0, 00:04:25.015 "tls_version": 0, 00:04:25.015 "enable_ktls": false 00:04:25.015 } 00:04:25.015 }, 00:04:25.015 { 00:04:25.015 "method": "sock_impl_set_options", 00:04:25.015 "params": { 00:04:25.015 "impl_name": "uring", 00:04:25.015 "recv_buf_size": 2097152, 00:04:25.015 "send_buf_size": 2097152, 00:04:25.015 "enable_recv_pipe": true, 00:04:25.015 "enable_quickack": false, 00:04:25.015 "enable_placement_id": 0, 00:04:25.015 "enable_zerocopy_send_server": false, 00:04:25.015 "enable_zerocopy_send_client": false, 00:04:25.015 "zerocopy_threshold": 0, 00:04:25.015 "tls_version": 0, 00:04:25.015 "enable_ktls": false 00:04:25.015 } 00:04:25.015 } 00:04:25.015 ] 00:04:25.015 }, 00:04:25.015 { 00:04:25.015 "subsystem": "vmd", 00:04:25.015 "config": [] 00:04:25.015 }, 00:04:25.015 { 00:04:25.015 "subsystem": "accel", 00:04:25.015 "config": [ 00:04:25.015 { 00:04:25.015 "method": "accel_set_options", 00:04:25.015 "params": { 00:04:25.015 "small_cache_size": 128, 00:04:25.015 "large_cache_size": 16, 00:04:25.015 "task_count": 2048, 00:04:25.015 "sequence_count": 2048, 00:04:25.015 "buf_count": 2048 00:04:25.015 } 00:04:25.015 } 00:04:25.015 ] 00:04:25.015 }, 00:04:25.015 { 00:04:25.015 "subsystem": "bdev", 00:04:25.015 "config": [ 00:04:25.015 { 00:04:25.015 "method": "bdev_set_options", 00:04:25.015 "params": { 00:04:25.015 "bdev_io_pool_size": 65535, 00:04:25.015 "bdev_io_cache_size": 256, 00:04:25.015 "bdev_auto_examine": true, 00:04:25.015 "iobuf_small_cache_size": 128, 00:04:25.015 "iobuf_large_cache_size": 16 00:04:25.015 } 00:04:25.015 }, 00:04:25.015 { 00:04:25.015 "method": "bdev_raid_set_options", 00:04:25.015 "params": { 00:04:25.015 "process_window_size_kb": 1024, 00:04:25.015 "process_max_bandwidth_mb_sec": 0 00:04:25.015 } 00:04:25.015 }, 00:04:25.015 { 00:04:25.015 "method": "bdev_iscsi_set_options", 00:04:25.015 "params": { 00:04:25.015 "timeout_sec": 30 00:04:25.015 } 00:04:25.015 }, 00:04:25.015 { 00:04:25.015 "method": "bdev_nvme_set_options", 00:04:25.015 "params": { 00:04:25.015 "action_on_timeout": "none", 00:04:25.015 "timeout_us": 0, 00:04:25.015 "timeout_admin_us": 0, 00:04:25.015 "keep_alive_timeout_ms": 10000, 00:04:25.015 "arbitration_burst": 0, 00:04:25.015 "low_priority_weight": 0, 00:04:25.015 "medium_priority_weight": 
0, 00:04:25.015 "high_priority_weight": 0, 00:04:25.015 "nvme_adminq_poll_period_us": 10000, 00:04:25.015 "nvme_ioq_poll_period_us": 0, 00:04:25.015 "io_queue_requests": 0, 00:04:25.015 "delay_cmd_submit": true, 00:04:25.015 "transport_retry_count": 4, 00:04:25.015 "bdev_retry_count": 3, 00:04:25.015 "transport_ack_timeout": 0, 00:04:25.015 "ctrlr_loss_timeout_sec": 0, 00:04:25.015 "reconnect_delay_sec": 0, 00:04:25.015 "fast_io_fail_timeout_sec": 0, 00:04:25.015 "disable_auto_failback": false, 00:04:25.015 "generate_uuids": false, 00:04:25.015 "transport_tos": 0, 00:04:25.015 "nvme_error_stat": false, 00:04:25.015 "rdma_srq_size": 0, 00:04:25.015 "io_path_stat": false, 00:04:25.015 "allow_accel_sequence": false, 00:04:25.015 "rdma_max_cq_size": 0, 00:04:25.015 "rdma_cm_event_timeout_ms": 0, 00:04:25.015 "dhchap_digests": [ 00:04:25.015 "sha256", 00:04:25.015 "sha384", 00:04:25.015 "sha512" 00:04:25.015 ], 00:04:25.015 "dhchap_dhgroups": [ 00:04:25.015 "null", 00:04:25.015 "ffdhe2048", 00:04:25.015 "ffdhe3072", 00:04:25.015 "ffdhe4096", 00:04:25.015 "ffdhe6144", 00:04:25.015 "ffdhe8192" 00:04:25.015 ] 00:04:25.015 } 00:04:25.015 }, 00:04:25.015 { 00:04:25.015 "method": "bdev_nvme_set_hotplug", 00:04:25.015 "params": { 00:04:25.015 "period_us": 100000, 00:04:25.016 "enable": false 00:04:25.016 } 00:04:25.016 }, 00:04:25.016 { 00:04:25.016 "method": "bdev_wait_for_examine" 00:04:25.016 } 00:04:25.016 ] 00:04:25.016 }, 00:04:25.016 { 00:04:25.016 "subsystem": "scsi", 00:04:25.016 "config": null 00:04:25.016 }, 00:04:25.016 { 00:04:25.016 "subsystem": "scheduler", 00:04:25.016 "config": [ 00:04:25.016 { 00:04:25.016 "method": "framework_set_scheduler", 00:04:25.016 "params": { 00:04:25.016 "name": "static" 00:04:25.016 } 00:04:25.016 } 00:04:25.016 ] 00:04:25.016 }, 00:04:25.016 { 00:04:25.016 "subsystem": "vhost_scsi", 00:04:25.016 "config": [] 00:04:25.016 }, 00:04:25.016 { 00:04:25.016 "subsystem": "vhost_blk", 00:04:25.016 "config": [] 00:04:25.016 }, 00:04:25.016 { 00:04:25.016 "subsystem": "ublk", 00:04:25.016 "config": [] 00:04:25.016 }, 00:04:25.016 { 00:04:25.016 "subsystem": "nbd", 00:04:25.016 "config": [] 00:04:25.016 }, 00:04:25.016 { 00:04:25.016 "subsystem": "nvmf", 00:04:25.016 "config": [ 00:04:25.016 { 00:04:25.016 "method": "nvmf_set_config", 00:04:25.016 "params": { 00:04:25.016 "discovery_filter": "match_any", 00:04:25.016 "admin_cmd_passthru": { 00:04:25.016 "identify_ctrlr": false 00:04:25.016 }, 00:04:25.016 "dhchap_digests": [ 00:04:25.016 "sha256", 00:04:25.016 "sha384", 00:04:25.016 "sha512" 00:04:25.016 ], 00:04:25.016 "dhchap_dhgroups": [ 00:04:25.016 "null", 00:04:25.016 "ffdhe2048", 00:04:25.016 "ffdhe3072", 00:04:25.016 "ffdhe4096", 00:04:25.016 "ffdhe6144", 00:04:25.016 "ffdhe8192" 00:04:25.016 ] 00:04:25.016 } 00:04:25.016 }, 00:04:25.016 { 00:04:25.016 "method": "nvmf_set_max_subsystems", 00:04:25.016 "params": { 00:04:25.016 "max_subsystems": 1024 00:04:25.016 } 00:04:25.016 }, 00:04:25.016 { 00:04:25.016 "method": "nvmf_set_crdt", 00:04:25.016 "params": { 00:04:25.016 "crdt1": 0, 00:04:25.016 "crdt2": 0, 00:04:25.016 "crdt3": 0 00:04:25.016 } 00:04:25.016 }, 00:04:25.016 { 00:04:25.016 "method": "nvmf_create_transport", 00:04:25.016 "params": { 00:04:25.016 "trtype": "TCP", 00:04:25.016 "max_queue_depth": 128, 00:04:25.016 "max_io_qpairs_per_ctrlr": 127, 00:04:25.016 "in_capsule_data_size": 4096, 00:04:25.016 "max_io_size": 131072, 00:04:25.016 "io_unit_size": 131072, 00:04:25.016 "max_aq_depth": 128, 00:04:25.016 "num_shared_buffers": 511, 00:04:25.016 
"buf_cache_size": 4294967295, 00:04:25.016 "dif_insert_or_strip": false, 00:04:25.016 "zcopy": false, 00:04:25.016 "c2h_success": true, 00:04:25.016 "sock_priority": 0, 00:04:25.016 "abort_timeout_sec": 1, 00:04:25.016 "ack_timeout": 0, 00:04:25.016 "data_wr_pool_size": 0 00:04:25.016 } 00:04:25.016 } 00:04:25.016 ] 00:04:25.016 }, 00:04:25.016 { 00:04:25.016 "subsystem": "iscsi", 00:04:25.016 "config": [ 00:04:25.016 { 00:04:25.016 "method": "iscsi_set_options", 00:04:25.016 "params": { 00:04:25.016 "node_base": "iqn.2016-06.io.spdk", 00:04:25.016 "max_sessions": 128, 00:04:25.016 "max_connections_per_session": 2, 00:04:25.016 "max_queue_depth": 64, 00:04:25.016 "default_time2wait": 2, 00:04:25.016 "default_time2retain": 20, 00:04:25.016 "first_burst_length": 8192, 00:04:25.016 "immediate_data": true, 00:04:25.016 "allow_duplicated_isid": false, 00:04:25.016 "error_recovery_level": 0, 00:04:25.016 "nop_timeout": 60, 00:04:25.016 "nop_in_interval": 30, 00:04:25.016 "disable_chap": false, 00:04:25.016 "require_chap": false, 00:04:25.016 "mutual_chap": false, 00:04:25.016 "chap_group": 0, 00:04:25.016 "max_large_datain_per_connection": 64, 00:04:25.016 "max_r2t_per_connection": 4, 00:04:25.016 "pdu_pool_size": 36864, 00:04:25.016 "immediate_data_pool_size": 16384, 00:04:25.016 "data_out_pool_size": 2048 00:04:25.016 } 00:04:25.016 } 00:04:25.016 ] 00:04:25.016 } 00:04:25.016 ] 00:04:25.016 } 00:04:25.016 10:22:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:25.016 10:22:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56985 00:04:25.016 10:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 56985 ']' 00:04:25.016 10:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 56985 00:04:25.016 10:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:25.016 10:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:25.016 10:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56985 00:04:25.016 10:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:25.016 killing process with pid 56985 00:04:25.016 10:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:25.016 10:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56985' 00:04:25.016 10:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 56985 00:04:25.016 10:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 56985 00:04:25.580 10:22:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:25.580 10:22:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57018 00:04:25.580 10:22:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57018 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57018 ']' 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57018 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:30.850 10:22:31 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57018 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:30.850 killing process with pid 57018 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57018' 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57018 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57018 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:30.850 00:04:30.850 real 0m7.106s 00:04:30.850 user 0m6.869s 00:04:30.850 sys 0m0.675s 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.850 ************************************ 00:04:30.850 END TEST skip_rpc_with_json 00:04:30.850 ************************************ 00:04:30.850 10:22:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:30.850 10:22:31 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:30.850 10:22:31 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:30.850 10:22:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.850 ************************************ 00:04:30.850 START TEST skip_rpc_with_delay 00:04:30.850 ************************************ 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:30.850 10:22:31 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:30.850 10:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:31.109 [2024-11-15 10:22:31.707249] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:31.109 10:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:31.109 10:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:31.109 10:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:31.109 10:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:31.109 00:04:31.109 real 0m0.117s 00:04:31.109 user 0m0.075s 00:04:31.109 sys 0m0.040s 00:04:31.109 10:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:31.109 10:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:31.109 ************************************ 00:04:31.109 END TEST skip_rpc_with_delay 00:04:31.109 ************************************ 00:04:31.109 10:22:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:31.109 10:22:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:31.109 10:22:31 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:31.109 10:22:31 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:31.109 10:22:31 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:31.109 10:22:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.109 ************************************ 00:04:31.109 START TEST exit_on_failed_rpc_init 00:04:31.109 ************************************ 00:04:31.109 10:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:31.109 10:22:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57124 00:04:31.109 10:22:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:31.109 10:22:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57124 00:04:31.109 10:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57124 ']' 00:04:31.109 10:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.109 10:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:31.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.109 10:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.109 10:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:31.109 10:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:31.109 [2024-11-15 10:22:31.850094] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
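The xtrace above is the setup half of exit_on_failed_rpc_init: a first spdk_tgt (pid 57124) is started on core mask 0x1 and takes ownership of the default RPC socket; the test then launches a second instance on core mask 0x2 and requires it to fail (the "socket path ... in use" error below). A minimal sketch of that pattern, assuming spdk_tgt is on PATH and using the default /var/tmp/spdk.sock instead of the test's waitforlisten/NOT helpers:

```bash
#!/usr/bin/env bash
# Sketch: a second target sharing the default RPC socket must fail to initialize.
set -u

spdk_tgt -m 0x1 &          # first instance owns /var/tmp/spdk.sock (assumed default path)
first=$!
sleep 1                    # crude stand-in for the test's waitforlisten polling

if spdk_tgt -m 0x2; then   # second instance, same RPC socket -> rpc listen fails, app exits non-zero
    echo "unexpected: second target initialized" >&2
    kill "$first"; exit 1
fi
echo "second target failed to init, as expected"
kill "$first"; wait "$first" 2>/dev/null || true
```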
00:04:31.109 [2024-11-15 10:22:31.850198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57124 ] 00:04:31.368 [2024-11-15 10:22:31.995601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.368 [2024-11-15 10:22:32.057436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.368 [2024-11-15 10:22:32.129023] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:31.626 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:31.626 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:31.626 10:22:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.626 10:22:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:31.626 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:31.626 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:31.626 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.626 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:31.626 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.626 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:31.626 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.626 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:31.626 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.626 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:31.626 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:31.626 [2024-11-15 10:22:32.390496] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:04:31.627 [2024-11-15 10:22:32.390608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57138 ] 00:04:31.885 [2024-11-15 10:22:32.537943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.885 [2024-11-15 10:22:32.602301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.885 [2024-11-15 10:22:32.602408] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:31.885 [2024-11-15 10:22:32.602426] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:31.885 [2024-11-15 10:22:32.602436] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:31.885 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:31.885 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:31.885 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:31.885 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:31.885 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:31.885 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:31.885 10:22:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:31.885 10:22:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57124 00:04:31.885 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57124 ']' 00:04:31.885 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57124 00:04:31.885 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:31.885 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:31.885 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57124 00:04:31.885 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:31.885 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:31.885 killing process with pid 57124 00:04:31.885 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57124' 00:04:31.885 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57124 00:04:31.885 10:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57124 00:04:32.452 00:04:32.452 real 0m1.313s 00:04:32.452 user 0m1.422s 00:04:32.452 sys 0m0.362s 00:04:32.452 10:22:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:32.452 10:22:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:32.452 ************************************ 00:04:32.452 END TEST exit_on_failed_rpc_init 00:04:32.452 ************************************ 00:04:32.452 10:22:33 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:32.452 00:04:32.452 real 0m14.335s 00:04:32.452 user 0m13.557s 00:04:32.452 sys 0m1.585s 00:04:32.452 10:22:33 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:32.452 10:22:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.452 ************************************ 00:04:32.452 END TEST skip_rpc 00:04:32.452 ************************************ 00:04:32.452 10:22:33 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:32.452 10:22:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:32.452 10:22:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:32.452 10:22:33 -- common/autotest_common.sh@10 -- # set +x 00:04:32.452 
************************************ 00:04:32.452 START TEST rpc_client 00:04:32.452 ************************************ 00:04:32.452 10:22:33 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:32.452 * Looking for test storage... 00:04:32.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:32.452 10:22:33 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:32.452 10:22:33 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:32.452 10:22:33 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:32.711 10:22:33 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.711 10:22:33 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:32.711 10:22:33 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.711 10:22:33 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:32.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.711 --rc genhtml_branch_coverage=1 00:04:32.711 --rc genhtml_function_coverage=1 00:04:32.711 --rc genhtml_legend=1 00:04:32.711 --rc geninfo_all_blocks=1 00:04:32.711 --rc geninfo_unexecuted_blocks=1 00:04:32.711 00:04:32.711 ' 00:04:32.711 10:22:33 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:32.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.711 --rc genhtml_branch_coverage=1 00:04:32.711 --rc genhtml_function_coverage=1 00:04:32.711 --rc genhtml_legend=1 00:04:32.711 --rc geninfo_all_blocks=1 00:04:32.711 --rc geninfo_unexecuted_blocks=1 00:04:32.711 00:04:32.711 ' 00:04:32.711 10:22:33 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:32.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.711 --rc genhtml_branch_coverage=1 00:04:32.711 --rc genhtml_function_coverage=1 00:04:32.711 --rc genhtml_legend=1 00:04:32.711 --rc geninfo_all_blocks=1 00:04:32.711 --rc geninfo_unexecuted_blocks=1 00:04:32.711 00:04:32.711 ' 00:04:32.711 10:22:33 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:32.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.711 --rc genhtml_branch_coverage=1 00:04:32.711 --rc genhtml_function_coverage=1 00:04:32.711 --rc genhtml_legend=1 00:04:32.711 --rc geninfo_all_blocks=1 00:04:32.711 --rc geninfo_unexecuted_blocks=1 00:04:32.711 00:04:32.711 ' 00:04:32.711 10:22:33 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:32.711 OK 00:04:32.711 10:22:33 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:32.711 00:04:32.711 real 0m0.205s 00:04:32.711 user 0m0.136s 00:04:32.711 sys 0m0.078s 00:04:32.711 10:22:33 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:32.711 ************************************ 00:04:32.711 END TEST rpc_client 00:04:32.711 ************************************ 00:04:32.711 10:22:33 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:32.711 10:22:33 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:32.711 10:22:33 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:32.711 10:22:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:32.711 10:22:33 -- common/autotest_common.sh@10 -- # set +x 00:04:32.711 ************************************ 00:04:32.711 START TEST json_config 00:04:32.711 ************************************ 00:04:32.712 10:22:33 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:32.712 10:22:33 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:32.712 10:22:33 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:32.712 10:22:33 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:32.976 10:22:33 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:32.976 10:22:33 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.976 10:22:33 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.976 10:22:33 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.976 10:22:33 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.976 10:22:33 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.976 10:22:33 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.976 10:22:33 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.976 10:22:33 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.976 10:22:33 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.976 10:22:33 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.976 10:22:33 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.976 10:22:33 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:32.976 10:22:33 json_config -- scripts/common.sh@345 -- # : 1 00:04:32.976 10:22:33 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.976 10:22:33 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:32.976 10:22:33 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:32.976 10:22:33 json_config -- scripts/common.sh@353 -- # local d=1 00:04:32.976 10:22:33 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.976 10:22:33 json_config -- scripts/common.sh@355 -- # echo 1 00:04:32.976 10:22:33 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.976 10:22:33 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:32.976 10:22:33 json_config -- scripts/common.sh@353 -- # local d=2 00:04:32.976 10:22:33 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.976 10:22:33 json_config -- scripts/common.sh@355 -- # echo 2 00:04:32.976 10:22:33 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.976 10:22:33 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.976 10:22:33 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.976 10:22:33 json_config -- scripts/common.sh@368 -- # return 0 00:04:32.976 10:22:33 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.976 10:22:33 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:32.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.976 --rc genhtml_branch_coverage=1 00:04:32.976 --rc genhtml_function_coverage=1 00:04:32.976 --rc genhtml_legend=1 00:04:32.976 --rc geninfo_all_blocks=1 00:04:32.976 --rc geninfo_unexecuted_blocks=1 00:04:32.976 00:04:32.976 ' 00:04:32.976 10:22:33 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:32.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.976 --rc genhtml_branch_coverage=1 00:04:32.976 --rc genhtml_function_coverage=1 00:04:32.976 --rc genhtml_legend=1 00:04:32.976 --rc geninfo_all_blocks=1 00:04:32.976 --rc geninfo_unexecuted_blocks=1 00:04:32.976 00:04:32.976 ' 00:04:32.976 10:22:33 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:32.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.976 --rc genhtml_branch_coverage=1 00:04:32.976 --rc genhtml_function_coverage=1 00:04:32.976 --rc genhtml_legend=1 00:04:32.976 --rc geninfo_all_blocks=1 00:04:32.976 --rc geninfo_unexecuted_blocks=1 00:04:32.976 00:04:32.976 ' 00:04:32.976 10:22:33 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:32.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.976 --rc genhtml_branch_coverage=1 00:04:32.976 --rc genhtml_function_coverage=1 00:04:32.976 --rc genhtml_legend=1 00:04:32.976 --rc geninfo_all_blocks=1 00:04:32.976 --rc geninfo_unexecuted_blocks=1 00:04:32.976 00:04:32.976 ' 00:04:32.976 10:22:33 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:32.976 10:22:33 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:32.976 10:22:33 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:32.976 10:22:33 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:32.976 10:22:33 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:32.976 10:22:33 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:32.976 10:22:33 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.976 10:22:33 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.976 10:22:33 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.976 10:22:33 json_config -- paths/export.sh@5 -- # export PATH 00:04:32.976 10:22:33 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@51 -- # : 0 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:32.976 10:22:33 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:32.976 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:32.976 10:22:33 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:32.976 10:22:33 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:32.976 10:22:33 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:32.976 10:22:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:32.976 10:22:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:32.977 10:22:33 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:32.977 10:22:33 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:32.977 10:22:33 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:32.977 10:22:33 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:32.977 10:22:33 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:32.977 10:22:33 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:32.977 10:22:33 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:32.977 10:22:33 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:32.977 10:22:33 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:32.977 10:22:33 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:32.977 10:22:33 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:32.977 INFO: JSON configuration test init 00:04:32.977 10:22:33 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:32.977 10:22:33 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:32.977 10:22:33 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:32.977 10:22:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:32.977 10:22:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.977 10:22:33 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:32.977 10:22:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:32.977 10:22:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.977 10:22:33 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:32.977 10:22:33 json_config -- json_config/common.sh@9 -- # local app=target 00:04:32.977 10:22:33 json_config -- json_config/common.sh@10 -- # shift 
00:04:32.977 10:22:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:32.977 10:22:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:32.977 10:22:33 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:32.977 10:22:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.977 10:22:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.977 10:22:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57272 00:04:32.977 10:22:33 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:32.977 Waiting for target to run... 00:04:32.977 10:22:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:32.977 10:22:33 json_config -- json_config/common.sh@25 -- # waitforlisten 57272 /var/tmp/spdk_tgt.sock 00:04:32.977 10:22:33 json_config -- common/autotest_common.sh@833 -- # '[' -z 57272 ']' 00:04:32.977 10:22:33 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:32.977 10:22:33 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:32.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:32.977 10:22:33 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:32.977 10:22:33 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:32.977 10:22:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.977 [2024-11-15 10:22:33.699503] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:04:32.977 [2024-11-15 10:22:33.699619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57272 ] 00:04:33.543 [2024-11-15 10:22:34.113094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.543 [2024-11-15 10:22:34.162445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.114 10:22:34 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:34.114 10:22:34 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:34.114 00:04:34.114 10:22:34 json_config -- json_config/common.sh@26 -- # echo '' 00:04:34.114 10:22:34 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:34.114 10:22:34 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:34.114 10:22:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:34.114 10:22:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.114 10:22:34 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:34.114 10:22:34 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:34.114 10:22:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:34.114 10:22:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.115 10:22:34 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:34.115 10:22:34 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:34.115 10:22:34 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:34.375 [2024-11-15 10:22:35.119254] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:34.633 10:22:35 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:34.633 10:22:35 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:34.633 10:22:35 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:34.633 10:22:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.633 10:22:35 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:34.633 10:22:35 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:34.633 10:22:35 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:34.633 10:22:35 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:34.633 10:22:35 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:34.633 10:22:35 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:34.633 10:22:35 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:34.633 10:22:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:34.892 10:22:35 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:34.892 10:22:35 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:34.892 10:22:35 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:34.892 10:22:35 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:34.892 10:22:35 json_config -- json_config/json_config.sh@54 -- # sort 00:04:34.892 10:22:35 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:34.892 10:22:35 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:34.892 10:22:35 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:34.892 10:22:35 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:34.892 10:22:35 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:34.892 10:22:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:34.892 10:22:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.892 10:22:35 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:34.892 10:22:35 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:34.892 10:22:35 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:34.892 10:22:35 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:34.892 10:22:35 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:34.892 10:22:35 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:34.892 10:22:35 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:34.892 10:22:35 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:34.892 10:22:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.892 10:22:35 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:34.892 10:22:35 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:34.892 10:22:35 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:34.892 10:22:35 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:34.892 10:22:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:35.151 MallocForNvmf0 00:04:35.151 10:22:35 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:35.151 10:22:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:35.719 MallocForNvmf1 00:04:35.719 10:22:36 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:35.719 10:22:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:35.719 [2024-11-15 10:22:36.544409] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:35.719 10:22:36 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:35.719 10:22:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:35.977 10:22:36 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:35.977 10:22:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:36.236 10:22:37 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:36.236 10:22:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:36.803 10:22:37 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:36.803 10:22:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:36.803 [2024-11-15 10:22:37.612984] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:36.803 10:22:37 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:36.803 10:22:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:36.803 10:22:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.067 10:22:37 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:37.068 10:22:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:37.068 10:22:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.068 10:22:37 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:04:37.068 10:22:37 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:37.068 10:22:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:37.339 MallocBdevForConfigChangeCheck 00:04:37.339 10:22:37 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:37.339 10:22:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:37.339 10:22:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.339 10:22:38 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:37.339 10:22:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:37.597 INFO: shutting down applications... 00:04:37.597 10:22:38 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:37.597 10:22:38 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:37.598 10:22:38 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:37.598 10:22:38 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:37.598 10:22:38 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:38.164 Calling clear_iscsi_subsystem 00:04:38.164 Calling clear_nvmf_subsystem 00:04:38.164 Calling clear_nbd_subsystem 00:04:38.164 Calling clear_ublk_subsystem 00:04:38.164 Calling clear_vhost_blk_subsystem 00:04:38.164 Calling clear_vhost_scsi_subsystem 00:04:38.164 Calling clear_bdev_subsystem 00:04:38.164 10:22:38 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:38.164 10:22:38 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:38.164 10:22:38 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:38.164 10:22:38 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:38.164 10:22:38 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:38.164 10:22:38 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:38.423 10:22:39 json_config -- json_config/json_config.sh@352 -- # break 00:04:38.423 10:22:39 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:38.423 10:22:39 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:38.423 10:22:39 json_config -- json_config/common.sh@31 -- # local app=target 00:04:38.423 10:22:39 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:38.423 10:22:39 json_config -- json_config/common.sh@35 -- # [[ -n 57272 ]] 00:04:38.423 10:22:39 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57272 00:04:38.423 10:22:39 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:38.423 10:22:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.423 10:22:39 json_config -- json_config/common.sh@41 -- # kill -0 57272 00:04:38.423 10:22:39 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:04:38.990 10:22:39 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:38.991 10:22:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.991 10:22:39 json_config -- json_config/common.sh@41 -- # kill -0 57272 00:04:38.991 10:22:39 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:38.991 10:22:39 json_config -- json_config/common.sh@43 -- # break 00:04:38.991 10:22:39 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:38.991 SPDK target shutdown done 00:04:38.991 10:22:39 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:38.991 INFO: relaunching applications... 00:04:38.991 10:22:39 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:38.991 10:22:39 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:38.991 10:22:39 json_config -- json_config/common.sh@9 -- # local app=target 00:04:38.991 10:22:39 json_config -- json_config/common.sh@10 -- # shift 00:04:38.991 10:22:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:38.991 10:22:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:38.991 10:22:39 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:38.991 10:22:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:38.991 10:22:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:38.991 10:22:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57473 00:04:38.991 Waiting for target to run... 00:04:38.991 10:22:39 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:38.991 10:22:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:38.991 10:22:39 json_config -- json_config/common.sh@25 -- # waitforlisten 57473 /var/tmp/spdk_tgt.sock 00:04:38.991 10:22:39 json_config -- common/autotest_common.sh@833 -- # '[' -z 57473 ']' 00:04:38.991 10:22:39 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:38.991 10:22:39 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:38.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:38.991 10:22:39 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:38.991 10:22:39 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:38.991 10:22:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.250 [2024-11-15 10:22:39.848516] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
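At this point the test has already dumped the live configuration with save_config (the tgt_rpc save_config trace above), stopped pid 57272 with SIGINT, and is relaunching spdk_tgt with --json so the new process (pid 57473) restores the same subsystems without replaying RPCs one by one. Roughly, with the repo paths to rpc.py and spdk_tgt abbreviated and $old_pid standing for the previously started target:

```bash
# Sketch: dump the running target's configuration, then restart the target from that file.
RPC_SOCK=/var/tmp/spdk_tgt.sock
CFG=spdk_tgt_config.json

rpc.py -s "$RPC_SOCK" save_config > "$CFG"   # full subsystem config as JSON
kill -SIGINT "$old_pid"                      # old_pid: pid of the target started earlier
wait "$old_pid" 2>/dev/null || true

spdk_tgt -m 0x1 -s 1024 -r "$RPC_SOCK" --json "$CFG" &
new_pid=$!
```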
00:04:39.250 [2024-11-15 10:22:39.848620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57473 ] 00:04:39.509 [2024-11-15 10:22:40.301303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.509 [2024-11-15 10:22:40.351736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.768 [2024-11-15 10:22:40.490656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:40.028 [2024-11-15 10:22:40.709398] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:40.028 [2024-11-15 10:22:40.741499] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:40.028 10:22:40 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:40.028 10:22:40 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:40.028 00:04:40.028 10:22:40 json_config -- json_config/common.sh@26 -- # echo '' 00:04:40.028 10:22:40 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:40.028 INFO: Checking if target configuration is the same... 00:04:40.028 10:22:40 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:40.028 10:22:40 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:40.028 10:22:40 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:40.028 10:22:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:40.028 + '[' 2 -ne 2 ']' 00:04:40.028 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:40.028 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:40.028 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:40.028 +++ basename /dev/fd/62 00:04:40.028 ++ mktemp /tmp/62.XXX 00:04:40.028 + tmp_file_1=/tmp/62.Fzz 00:04:40.028 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:40.028 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:40.028 + tmp_file_2=/tmp/spdk_tgt_config.json.UJJ 00:04:40.028 + ret=0 00:04:40.028 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:40.595 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:40.595 + diff -u /tmp/62.Fzz /tmp/spdk_tgt_config.json.UJJ 00:04:40.595 + echo 'INFO: JSON config files are the same' 00:04:40.595 INFO: JSON config files are the same 00:04:40.595 + rm /tmp/62.Fzz /tmp/spdk_tgt_config.json.UJJ 00:04:40.595 + exit 0 00:04:40.595 10:22:41 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:40.595 10:22:41 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:40.595 INFO: changing configuration and checking if this can be detected... 
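The '+' lines are bash xtrace from json_diff.sh: the live config (save_config over /var/tmp/spdk_tgt.sock) and the saved spdk_tgt_config.json are each normalized with config_filter.py -method sort before being compared with diff, so only real content differences count. A rough equivalent, assuming config_filter.py reads the config on stdin as the trace suggests and with the rpc.py path abbreviated:

```bash
# Sketch: order-insensitive comparison of the live config against the saved file.
sort_cfg() { ./test/json_config/config_filter.py -method sort; }   # path relative to the spdk repo

live=$(mktemp) disk=$(mktemp)
rpc.py -s /var/tmp/spdk_tgt.sock save_config | sort_cfg > "$live"
sort_cfg < spdk_tgt_config.json              > "$disk"

if diff -u "$live" "$disk"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'configs differ' >&2
fi
rm -f "$live" "$disk"
```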
00:04:40.595 10:22:41 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:40.595 10:22:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:40.854 10:22:41 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:40.854 10:22:41 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:40.854 10:22:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:40.854 + '[' 2 -ne 2 ']' 00:04:40.854 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:40.854 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:40.854 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:40.854 +++ basename /dev/fd/62 00:04:40.854 ++ mktemp /tmp/62.XXX 00:04:40.854 + tmp_file_1=/tmp/62.6CY 00:04:40.854 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:40.854 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:40.854 + tmp_file_2=/tmp/spdk_tgt_config.json.Nsm 00:04:40.854 + ret=0 00:04:40.854 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:41.421 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:41.421 + diff -u /tmp/62.6CY /tmp/spdk_tgt_config.json.Nsm 00:04:41.421 + ret=1 00:04:41.421 + echo '=== Start of file: /tmp/62.6CY ===' 00:04:41.421 + cat /tmp/62.6CY 00:04:41.421 + echo '=== End of file: /tmp/62.6CY ===' 00:04:41.421 + echo '' 00:04:41.421 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Nsm ===' 00:04:41.421 + cat /tmp/spdk_tgt_config.json.Nsm 00:04:41.421 + echo '=== End of file: /tmp/spdk_tgt_config.json.Nsm ===' 00:04:41.421 + echo '' 00:04:41.421 + rm /tmp/62.6CY /tmp/spdk_tgt_config.json.Nsm 00:04:41.421 + exit 1 00:04:41.421 INFO: configuration change detected. 00:04:41.421 10:22:42 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
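The two json_diff.sh runs above boil down to: dump the live configuration over RPC, key-sort both JSON documents with config_filter.py so ordering cannot cause false mismatches, and let diff decide. A condensed sketch of both checks, reusing the scripts and socket seen in this trace:

  rootdir=/home/vagrant/spdk_repo/spdk
  rpc="$rootdir/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  # Dump the running config and normalize both sides before comparing.
  $rpc save_config | $rootdir/test/json_config/config_filter.py -method sort > /tmp/live.json
  $rootdir/test/json_config/config_filter.py -method sort < $rootdir/spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'
  # Mutate the target (drop the scratch malloc bdev) and expect the diff to fail now.
  $rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
  $rpc save_config | $rootdir/test/json_config/config_filter.py -method sort > /tmp/live.json
  diff -u /tmp/saved.json /tmp/live.json || echo 'INFO: configuration change detected.'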
00:04:41.421 10:22:42 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:41.421 10:22:42 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:41.421 10:22:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:41.421 10:22:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.421 10:22:42 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:41.421 10:22:42 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:41.421 10:22:42 json_config -- json_config/json_config.sh@324 -- # [[ -n 57473 ]] 00:04:41.421 10:22:42 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:41.421 10:22:42 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:41.422 10:22:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:41.422 10:22:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.422 10:22:42 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:41.422 10:22:42 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:41.422 10:22:42 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:41.422 10:22:42 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:41.422 10:22:42 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:41.422 10:22:42 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:41.422 10:22:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:41.422 10:22:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.422 10:22:42 json_config -- json_config/json_config.sh@330 -- # killprocess 57473 00:04:41.422 10:22:42 json_config -- common/autotest_common.sh@952 -- # '[' -z 57473 ']' 00:04:41.422 10:22:42 json_config -- common/autotest_common.sh@956 -- # kill -0 57473 00:04:41.422 10:22:42 json_config -- common/autotest_common.sh@957 -- # uname 00:04:41.422 10:22:42 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:41.422 10:22:42 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57473 00:04:41.422 10:22:42 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:41.422 10:22:42 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:41.422 killing process with pid 57473 00:04:41.422 10:22:42 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57473' 00:04:41.422 10:22:42 json_config -- common/autotest_common.sh@971 -- # kill 57473 00:04:41.422 10:22:42 json_config -- common/autotest_common.sh@976 -- # wait 57473 00:04:41.680 10:22:42 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:41.680 10:22:42 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:41.680 10:22:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:41.680 10:22:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.681 10:22:42 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:41.681 INFO: Success 00:04:41.681 10:22:42 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:41.681 00:04:41.681 real 0m9.022s 00:04:41.681 user 0m13.114s 00:04:41.681 sys 0m1.813s 00:04:41.681 
10:22:42 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:41.681 10:22:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.681 ************************************ 00:04:41.681 END TEST json_config 00:04:41.681 ************************************ 00:04:41.681 10:22:42 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:41.681 10:22:42 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:41.681 10:22:42 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:41.681 10:22:42 -- common/autotest_common.sh@10 -- # set +x 00:04:41.681 ************************************ 00:04:41.681 START TEST json_config_extra_key 00:04:41.681 ************************************ 00:04:41.681 10:22:42 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:41.939 10:22:42 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:41.939 10:22:42 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:41.939 10:22:42 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:41.940 10:22:42 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:41.940 10:22:42 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.940 10:22:42 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:41.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.940 --rc genhtml_branch_coverage=1 00:04:41.940 --rc genhtml_function_coverage=1 00:04:41.940 --rc genhtml_legend=1 00:04:41.940 --rc geninfo_all_blocks=1 00:04:41.940 --rc geninfo_unexecuted_blocks=1 00:04:41.940 00:04:41.940 ' 00:04:41.940 10:22:42 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:41.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.940 --rc genhtml_branch_coverage=1 00:04:41.940 --rc genhtml_function_coverage=1 00:04:41.940 --rc genhtml_legend=1 00:04:41.940 --rc geninfo_all_blocks=1 00:04:41.940 --rc geninfo_unexecuted_blocks=1 00:04:41.940 00:04:41.940 ' 00:04:41.940 10:22:42 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:41.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.940 --rc genhtml_branch_coverage=1 00:04:41.940 --rc genhtml_function_coverage=1 00:04:41.940 --rc genhtml_legend=1 00:04:41.940 --rc geninfo_all_blocks=1 00:04:41.940 --rc geninfo_unexecuted_blocks=1 00:04:41.940 00:04:41.940 ' 00:04:41.940 10:22:42 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:41.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.940 --rc genhtml_branch_coverage=1 00:04:41.940 --rc genhtml_function_coverage=1 00:04:41.940 --rc genhtml_legend=1 00:04:41.940 --rc geninfo_all_blocks=1 00:04:41.940 --rc geninfo_unexecuted_blocks=1 00:04:41.940 00:04:41.940 ' 00:04:41.940 10:22:42 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.940 10:22:42 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.940 10:22:42 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.940 10:22:42 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.940 10:22:42 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.940 10:22:42 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.940 10:22:42 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:41.940 10:22:42 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:41.940 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:41.940 10:22:42 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:41.940 10:22:42 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:41.940 10:22:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:41.940 10:22:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:41.940 10:22:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:41.940 10:22:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:41.940 10:22:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:41.940 10:22:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:41.940 10:22:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:41.940 10:22:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:41.940 10:22:42 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:41.940 INFO: launching applications... 00:04:41.940 10:22:42 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:04:41.940 10:22:42 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:41.940 10:22:42 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:41.940 10:22:42 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:41.940 10:22:42 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:41.940 10:22:42 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:41.940 10:22:42 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:41.940 10:22:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.940 10:22:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.940 10:22:42 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57628 00:04:41.940 Waiting for target to run... 00:04:41.940 10:22:42 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:41.940 10:22:42 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57628 /var/tmp/spdk_tgt.sock 00:04:41.940 10:22:42 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57628 ']' 00:04:41.940 10:22:42 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:41.940 10:22:42 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:41.940 10:22:42 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:41.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:41.940 10:22:42 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:41.940 10:22:42 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:41.940 10:22:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:41.940 [2024-11-15 10:22:42.747136] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:04:41.940 [2024-11-15 10:22:42.747250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57628 ] 00:04:42.514 [2024-11-15 10:22:43.171804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.514 [2024-11-15 10:22:43.219538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.514 [2024-11-15 10:22:43.251646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:43.112 10:22:43 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:43.112 10:22:43 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:43.112 00:04:43.112 10:22:43 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:43.112 INFO: shutting down applications... 00:04:43.112 10:22:43 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
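The shutdown that follows uses the same pattern seen at the start of this section: send SIGINT to the target, then poll its PID until it exits. Roughly, with the 30-iteration, 0.5 s bound taken from the trace (the PID is just the one reported for this run):

  app_pid=57628                 # PID reported for this spdk_tgt instance
  kill -SIGINT "$app_pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$app_pid" 2>/dev/null || break   # process gone: shutdown finished
      sleep 0.5
  done
  echo 'SPDK target shutdown done'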
00:04:43.112 10:22:43 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:43.112 10:22:43 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:43.112 10:22:43 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:43.112 10:22:43 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57628 ]] 00:04:43.112 10:22:43 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57628 00:04:43.112 10:22:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:43.112 10:22:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.112 10:22:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57628 00:04:43.112 10:22:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:43.371 10:22:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:43.371 10:22:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.371 10:22:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57628 00:04:43.371 10:22:44 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:43.371 10:22:44 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:43.371 10:22:44 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:43.371 10:22:44 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:43.371 SPDK target shutdown done 00:04:43.371 Success 00:04:43.371 10:22:44 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:43.371 00:04:43.371 real 0m1.707s 00:04:43.371 user 0m1.568s 00:04:43.371 sys 0m0.449s 00:04:43.371 10:22:44 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:43.371 10:22:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:43.371 ************************************ 00:04:43.371 END TEST json_config_extra_key 00:04:43.371 ************************************ 00:04:43.630 10:22:44 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:43.630 10:22:44 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:43.630 10:22:44 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:43.630 10:22:44 -- common/autotest_common.sh@10 -- # set +x 00:04:43.630 ************************************ 00:04:43.630 START TEST alias_rpc 00:04:43.630 ************************************ 00:04:43.630 10:22:44 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:43.630 * Looking for test storage... 
00:04:43.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:43.630 10:22:44 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:43.630 10:22:44 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:43.630 10:22:44 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:43.630 10:22:44 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.630 10:22:44 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:43.630 10:22:44 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.630 10:22:44 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:43.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.630 --rc genhtml_branch_coverage=1 00:04:43.630 --rc genhtml_function_coverage=1 00:04:43.630 --rc genhtml_legend=1 00:04:43.630 --rc geninfo_all_blocks=1 00:04:43.630 --rc geninfo_unexecuted_blocks=1 00:04:43.630 00:04:43.630 ' 00:04:43.630 10:22:44 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:43.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.630 --rc genhtml_branch_coverage=1 00:04:43.630 --rc genhtml_function_coverage=1 00:04:43.631 --rc genhtml_legend=1 00:04:43.631 --rc geninfo_all_blocks=1 00:04:43.631 --rc geninfo_unexecuted_blocks=1 00:04:43.631 00:04:43.631 ' 00:04:43.631 10:22:44 alias_rpc -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:43.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.631 --rc genhtml_branch_coverage=1 00:04:43.631 --rc genhtml_function_coverage=1 00:04:43.631 --rc genhtml_legend=1 00:04:43.631 --rc geninfo_all_blocks=1 00:04:43.631 --rc geninfo_unexecuted_blocks=1 00:04:43.631 00:04:43.631 ' 00:04:43.631 10:22:44 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:43.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.631 --rc genhtml_branch_coverage=1 00:04:43.631 --rc genhtml_function_coverage=1 00:04:43.631 --rc genhtml_legend=1 00:04:43.631 --rc geninfo_all_blocks=1 00:04:43.631 --rc geninfo_unexecuted_blocks=1 00:04:43.631 00:04:43.631 ' 00:04:43.631 10:22:44 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:43.631 10:22:44 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57701 00:04:43.631 10:22:44 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.631 10:22:44 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57701 00:04:43.631 10:22:44 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57701 ']' 00:04:43.631 10:22:44 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.631 10:22:44 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:43.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.631 10:22:44 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.631 10:22:44 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:43.631 10:22:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.890 [2024-11-15 10:22:44.543855] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:04:43.890 [2024-11-15 10:22:44.544018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57701 ] 00:04:43.890 [2024-11-15 10:22:44.703518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.148 [2024-11-15 10:22:44.770269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.148 [2024-11-15 10:22:44.842007] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:44.714 10:22:45 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:44.714 10:22:45 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:44.714 10:22:45 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:45.282 10:22:45 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57701 00:04:45.282 10:22:45 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57701 ']' 00:04:45.282 10:22:45 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57701 00:04:45.282 10:22:45 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:45.282 10:22:45 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:45.282 10:22:45 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57701 00:04:45.282 10:22:45 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:45.282 killing process with pid 57701 00:04:45.282 10:22:45 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:45.282 10:22:45 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57701' 00:04:45.282 10:22:45 alias_rpc -- common/autotest_common.sh@971 -- # kill 57701 00:04:45.282 10:22:45 alias_rpc -- common/autotest_common.sh@976 -- # wait 57701 00:04:45.540 00:04:45.540 real 0m1.975s 00:04:45.540 user 0m2.296s 00:04:45.540 sys 0m0.446s 00:04:45.540 ************************************ 00:04:45.540 END TEST alias_rpc 00:04:45.540 ************************************ 00:04:45.540 10:22:46 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:45.540 10:22:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.540 10:22:46 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:45.540 10:22:46 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:45.540 10:22:46 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:45.540 10:22:46 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:45.540 10:22:46 -- common/autotest_common.sh@10 -- # set +x 00:04:45.540 ************************************ 00:04:45.540 START TEST spdkcli_tcp 00:04:45.540 ************************************ 00:04:45.540 10:22:46 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:45.540 * Looking for test storage... 
00:04:45.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:45.540 10:22:46 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:45.540 10:22:46 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:45.540 10:22:46 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:45.848 10:22:46 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.848 10:22:46 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.849 10:22:46 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.849 10:22:46 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:45.849 10:22:46 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.849 10:22:46 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:45.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.849 --rc genhtml_branch_coverage=1 00:04:45.849 --rc genhtml_function_coverage=1 00:04:45.849 --rc genhtml_legend=1 00:04:45.849 --rc geninfo_all_blocks=1 00:04:45.849 --rc geninfo_unexecuted_blocks=1 00:04:45.849 00:04:45.849 ' 00:04:45.849 10:22:46 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:45.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.849 --rc genhtml_branch_coverage=1 00:04:45.849 --rc genhtml_function_coverage=1 00:04:45.849 --rc genhtml_legend=1 00:04:45.849 --rc geninfo_all_blocks=1 00:04:45.849 --rc geninfo_unexecuted_blocks=1 00:04:45.849 
00:04:45.849 ' 00:04:45.849 10:22:46 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:45.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.849 --rc genhtml_branch_coverage=1 00:04:45.849 --rc genhtml_function_coverage=1 00:04:45.849 --rc genhtml_legend=1 00:04:45.849 --rc geninfo_all_blocks=1 00:04:45.849 --rc geninfo_unexecuted_blocks=1 00:04:45.849 00:04:45.849 ' 00:04:45.849 10:22:46 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:45.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.849 --rc genhtml_branch_coverage=1 00:04:45.849 --rc genhtml_function_coverage=1 00:04:45.849 --rc genhtml_legend=1 00:04:45.849 --rc geninfo_all_blocks=1 00:04:45.849 --rc geninfo_unexecuted_blocks=1 00:04:45.849 00:04:45.849 ' 00:04:45.849 10:22:46 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:45.849 10:22:46 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:45.849 10:22:46 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:45.849 10:22:46 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:45.849 10:22:46 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:45.849 10:22:46 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:45.849 10:22:46 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:45.849 10:22:46 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:45.849 10:22:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:45.849 10:22:46 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57785 00:04:45.849 10:22:46 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:45.849 10:22:46 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57785 00:04:45.849 10:22:46 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 57785 ']' 00:04:45.849 10:22:46 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.849 10:22:46 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:45.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.849 10:22:46 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.849 10:22:46 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:45.849 10:22:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:45.849 [2024-11-15 10:22:46.546684] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:04:45.849 [2024-11-15 10:22:46.547320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57785 ] 00:04:45.849 [2024-11-15 10:22:46.696315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:46.107 [2024-11-15 10:22:46.760360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.107 [2024-11-15 10:22:46.760369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.107 [2024-11-15 10:22:46.831524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:47.044 10:22:47 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:47.044 10:22:47 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:47.044 10:22:47 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57808 00:04:47.044 10:22:47 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:47.044 10:22:47 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:47.044 [ 00:04:47.044 "bdev_malloc_delete", 00:04:47.044 "bdev_malloc_create", 00:04:47.044 "bdev_null_resize", 00:04:47.044 "bdev_null_delete", 00:04:47.044 "bdev_null_create", 00:04:47.044 "bdev_nvme_cuse_unregister", 00:04:47.044 "bdev_nvme_cuse_register", 00:04:47.044 "bdev_opal_new_user", 00:04:47.044 "bdev_opal_set_lock_state", 00:04:47.044 "bdev_opal_delete", 00:04:47.044 "bdev_opal_get_info", 00:04:47.044 "bdev_opal_create", 00:04:47.044 "bdev_nvme_opal_revert", 00:04:47.044 "bdev_nvme_opal_init", 00:04:47.044 "bdev_nvme_send_cmd", 00:04:47.044 "bdev_nvme_set_keys", 00:04:47.044 "bdev_nvme_get_path_iostat", 00:04:47.044 "bdev_nvme_get_mdns_discovery_info", 00:04:47.044 "bdev_nvme_stop_mdns_discovery", 00:04:47.044 "bdev_nvme_start_mdns_discovery", 00:04:47.044 "bdev_nvme_set_multipath_policy", 00:04:47.044 "bdev_nvme_set_preferred_path", 00:04:47.044 "bdev_nvme_get_io_paths", 00:04:47.044 "bdev_nvme_remove_error_injection", 00:04:47.044 "bdev_nvme_add_error_injection", 00:04:47.044 "bdev_nvme_get_discovery_info", 00:04:47.044 "bdev_nvme_stop_discovery", 00:04:47.044 "bdev_nvme_start_discovery", 00:04:47.044 "bdev_nvme_get_controller_health_info", 00:04:47.044 "bdev_nvme_disable_controller", 00:04:47.044 "bdev_nvme_enable_controller", 00:04:47.044 "bdev_nvme_reset_controller", 00:04:47.044 "bdev_nvme_get_transport_statistics", 00:04:47.044 "bdev_nvme_apply_firmware", 00:04:47.044 "bdev_nvme_detach_controller", 00:04:47.044 "bdev_nvme_get_controllers", 00:04:47.044 "bdev_nvme_attach_controller", 00:04:47.044 "bdev_nvme_set_hotplug", 00:04:47.044 "bdev_nvme_set_options", 00:04:47.044 "bdev_passthru_delete", 00:04:47.044 "bdev_passthru_create", 00:04:47.044 "bdev_lvol_set_parent_bdev", 00:04:47.044 "bdev_lvol_set_parent", 00:04:47.044 "bdev_lvol_check_shallow_copy", 00:04:47.044 "bdev_lvol_start_shallow_copy", 00:04:47.044 "bdev_lvol_grow_lvstore", 00:04:47.044 "bdev_lvol_get_lvols", 00:04:47.044 "bdev_lvol_get_lvstores", 00:04:47.044 "bdev_lvol_delete", 00:04:47.044 "bdev_lvol_set_read_only", 00:04:47.044 "bdev_lvol_resize", 00:04:47.044 "bdev_lvol_decouple_parent", 00:04:47.044 "bdev_lvol_inflate", 00:04:47.044 "bdev_lvol_rename", 00:04:47.044 "bdev_lvol_clone_bdev", 00:04:47.044 "bdev_lvol_clone", 00:04:47.044 "bdev_lvol_snapshot", 
00:04:47.044 "bdev_lvol_create", 00:04:47.044 "bdev_lvol_delete_lvstore", 00:04:47.044 "bdev_lvol_rename_lvstore", 00:04:47.044 "bdev_lvol_create_lvstore", 00:04:47.044 "bdev_raid_set_options", 00:04:47.044 "bdev_raid_remove_base_bdev", 00:04:47.044 "bdev_raid_add_base_bdev", 00:04:47.044 "bdev_raid_delete", 00:04:47.044 "bdev_raid_create", 00:04:47.044 "bdev_raid_get_bdevs", 00:04:47.044 "bdev_error_inject_error", 00:04:47.044 "bdev_error_delete", 00:04:47.044 "bdev_error_create", 00:04:47.044 "bdev_split_delete", 00:04:47.044 "bdev_split_create", 00:04:47.044 "bdev_delay_delete", 00:04:47.044 "bdev_delay_create", 00:04:47.044 "bdev_delay_update_latency", 00:04:47.044 "bdev_zone_block_delete", 00:04:47.044 "bdev_zone_block_create", 00:04:47.044 "blobfs_create", 00:04:47.044 "blobfs_detect", 00:04:47.044 "blobfs_set_cache_size", 00:04:47.044 "bdev_aio_delete", 00:04:47.044 "bdev_aio_rescan", 00:04:47.044 "bdev_aio_create", 00:04:47.044 "bdev_ftl_set_property", 00:04:47.044 "bdev_ftl_get_properties", 00:04:47.044 "bdev_ftl_get_stats", 00:04:47.044 "bdev_ftl_unmap", 00:04:47.044 "bdev_ftl_unload", 00:04:47.044 "bdev_ftl_delete", 00:04:47.044 "bdev_ftl_load", 00:04:47.044 "bdev_ftl_create", 00:04:47.044 "bdev_virtio_attach_controller", 00:04:47.044 "bdev_virtio_scsi_get_devices", 00:04:47.044 "bdev_virtio_detach_controller", 00:04:47.044 "bdev_virtio_blk_set_hotplug", 00:04:47.044 "bdev_iscsi_delete", 00:04:47.044 "bdev_iscsi_create", 00:04:47.044 "bdev_iscsi_set_options", 00:04:47.044 "bdev_uring_delete", 00:04:47.044 "bdev_uring_rescan", 00:04:47.044 "bdev_uring_create", 00:04:47.044 "accel_error_inject_error", 00:04:47.044 "ioat_scan_accel_module", 00:04:47.044 "dsa_scan_accel_module", 00:04:47.044 "iaa_scan_accel_module", 00:04:47.044 "keyring_file_remove_key", 00:04:47.044 "keyring_file_add_key", 00:04:47.044 "keyring_linux_set_options", 00:04:47.044 "fsdev_aio_delete", 00:04:47.044 "fsdev_aio_create", 00:04:47.044 "iscsi_get_histogram", 00:04:47.044 "iscsi_enable_histogram", 00:04:47.044 "iscsi_set_options", 00:04:47.044 "iscsi_get_auth_groups", 00:04:47.044 "iscsi_auth_group_remove_secret", 00:04:47.044 "iscsi_auth_group_add_secret", 00:04:47.044 "iscsi_delete_auth_group", 00:04:47.044 "iscsi_create_auth_group", 00:04:47.044 "iscsi_set_discovery_auth", 00:04:47.044 "iscsi_get_options", 00:04:47.044 "iscsi_target_node_request_logout", 00:04:47.044 "iscsi_target_node_set_redirect", 00:04:47.044 "iscsi_target_node_set_auth", 00:04:47.044 "iscsi_target_node_add_lun", 00:04:47.044 "iscsi_get_stats", 00:04:47.044 "iscsi_get_connections", 00:04:47.044 "iscsi_portal_group_set_auth", 00:04:47.044 "iscsi_start_portal_group", 00:04:47.044 "iscsi_delete_portal_group", 00:04:47.044 "iscsi_create_portal_group", 00:04:47.044 "iscsi_get_portal_groups", 00:04:47.044 "iscsi_delete_target_node", 00:04:47.044 "iscsi_target_node_remove_pg_ig_maps", 00:04:47.044 "iscsi_target_node_add_pg_ig_maps", 00:04:47.044 "iscsi_create_target_node", 00:04:47.044 "iscsi_get_target_nodes", 00:04:47.045 "iscsi_delete_initiator_group", 00:04:47.045 "iscsi_initiator_group_remove_initiators", 00:04:47.045 "iscsi_initiator_group_add_initiators", 00:04:47.045 "iscsi_create_initiator_group", 00:04:47.045 "iscsi_get_initiator_groups", 00:04:47.045 "nvmf_set_crdt", 00:04:47.045 "nvmf_set_config", 00:04:47.045 "nvmf_set_max_subsystems", 00:04:47.045 "nvmf_stop_mdns_prr", 00:04:47.045 "nvmf_publish_mdns_prr", 00:04:47.045 "nvmf_subsystem_get_listeners", 00:04:47.045 "nvmf_subsystem_get_qpairs", 00:04:47.045 
"nvmf_subsystem_get_controllers", 00:04:47.045 "nvmf_get_stats", 00:04:47.045 "nvmf_get_transports", 00:04:47.045 "nvmf_create_transport", 00:04:47.045 "nvmf_get_targets", 00:04:47.045 "nvmf_delete_target", 00:04:47.045 "nvmf_create_target", 00:04:47.045 "nvmf_subsystem_allow_any_host", 00:04:47.045 "nvmf_subsystem_set_keys", 00:04:47.045 "nvmf_subsystem_remove_host", 00:04:47.045 "nvmf_subsystem_add_host", 00:04:47.045 "nvmf_ns_remove_host", 00:04:47.045 "nvmf_ns_add_host", 00:04:47.045 "nvmf_subsystem_remove_ns", 00:04:47.045 "nvmf_subsystem_set_ns_ana_group", 00:04:47.045 "nvmf_subsystem_add_ns", 00:04:47.045 "nvmf_subsystem_listener_set_ana_state", 00:04:47.045 "nvmf_discovery_get_referrals", 00:04:47.045 "nvmf_discovery_remove_referral", 00:04:47.045 "nvmf_discovery_add_referral", 00:04:47.045 "nvmf_subsystem_remove_listener", 00:04:47.045 "nvmf_subsystem_add_listener", 00:04:47.045 "nvmf_delete_subsystem", 00:04:47.045 "nvmf_create_subsystem", 00:04:47.045 "nvmf_get_subsystems", 00:04:47.045 "env_dpdk_get_mem_stats", 00:04:47.045 "nbd_get_disks", 00:04:47.045 "nbd_stop_disk", 00:04:47.045 "nbd_start_disk", 00:04:47.045 "ublk_recover_disk", 00:04:47.045 "ublk_get_disks", 00:04:47.045 "ublk_stop_disk", 00:04:47.045 "ublk_start_disk", 00:04:47.045 "ublk_destroy_target", 00:04:47.045 "ublk_create_target", 00:04:47.045 "virtio_blk_create_transport", 00:04:47.045 "virtio_blk_get_transports", 00:04:47.045 "vhost_controller_set_coalescing", 00:04:47.045 "vhost_get_controllers", 00:04:47.045 "vhost_delete_controller", 00:04:47.045 "vhost_create_blk_controller", 00:04:47.045 "vhost_scsi_controller_remove_target", 00:04:47.045 "vhost_scsi_controller_add_target", 00:04:47.045 "vhost_start_scsi_controller", 00:04:47.045 "vhost_create_scsi_controller", 00:04:47.045 "thread_set_cpumask", 00:04:47.045 "scheduler_set_options", 00:04:47.045 "framework_get_governor", 00:04:47.045 "framework_get_scheduler", 00:04:47.045 "framework_set_scheduler", 00:04:47.045 "framework_get_reactors", 00:04:47.045 "thread_get_io_channels", 00:04:47.045 "thread_get_pollers", 00:04:47.045 "thread_get_stats", 00:04:47.045 "framework_monitor_context_switch", 00:04:47.045 "spdk_kill_instance", 00:04:47.045 "log_enable_timestamps", 00:04:47.045 "log_get_flags", 00:04:47.045 "log_clear_flag", 00:04:47.045 "log_set_flag", 00:04:47.045 "log_get_level", 00:04:47.045 "log_set_level", 00:04:47.045 "log_get_print_level", 00:04:47.045 "log_set_print_level", 00:04:47.045 "framework_enable_cpumask_locks", 00:04:47.045 "framework_disable_cpumask_locks", 00:04:47.045 "framework_wait_init", 00:04:47.045 "framework_start_init", 00:04:47.045 "scsi_get_devices", 00:04:47.045 "bdev_get_histogram", 00:04:47.045 "bdev_enable_histogram", 00:04:47.045 "bdev_set_qos_limit", 00:04:47.045 "bdev_set_qd_sampling_period", 00:04:47.045 "bdev_get_bdevs", 00:04:47.045 "bdev_reset_iostat", 00:04:47.045 "bdev_get_iostat", 00:04:47.045 "bdev_examine", 00:04:47.045 "bdev_wait_for_examine", 00:04:47.045 "bdev_set_options", 00:04:47.045 "accel_get_stats", 00:04:47.045 "accel_set_options", 00:04:47.045 "accel_set_driver", 00:04:47.045 "accel_crypto_key_destroy", 00:04:47.045 "accel_crypto_keys_get", 00:04:47.045 "accel_crypto_key_create", 00:04:47.045 "accel_assign_opc", 00:04:47.045 "accel_get_module_info", 00:04:47.045 "accel_get_opc_assignments", 00:04:47.045 "vmd_rescan", 00:04:47.045 "vmd_remove_device", 00:04:47.045 "vmd_enable", 00:04:47.045 "sock_get_default_impl", 00:04:47.045 "sock_set_default_impl", 00:04:47.045 "sock_impl_set_options", 00:04:47.045 
"sock_impl_get_options", 00:04:47.045 "iobuf_get_stats", 00:04:47.045 "iobuf_set_options", 00:04:47.045 "keyring_get_keys", 00:04:47.045 "framework_get_pci_devices", 00:04:47.045 "framework_get_config", 00:04:47.045 "framework_get_subsystems", 00:04:47.045 "fsdev_set_opts", 00:04:47.045 "fsdev_get_opts", 00:04:47.045 "trace_get_info", 00:04:47.045 "trace_get_tpoint_group_mask", 00:04:47.045 "trace_disable_tpoint_group", 00:04:47.045 "trace_enable_tpoint_group", 00:04:47.045 "trace_clear_tpoint_mask", 00:04:47.045 "trace_set_tpoint_mask", 00:04:47.045 "notify_get_notifications", 00:04:47.045 "notify_get_types", 00:04:47.045 "spdk_get_version", 00:04:47.045 "rpc_get_methods" 00:04:47.045 ] 00:04:47.045 10:22:47 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:47.045 10:22:47 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:47.045 10:22:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:47.045 10:22:47 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:47.045 10:22:47 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57785 00:04:47.045 10:22:47 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 57785 ']' 00:04:47.045 10:22:47 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 57785 00:04:47.045 10:22:47 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:04:47.045 10:22:47 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:47.045 10:22:47 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57785 00:04:47.305 10:22:47 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:47.305 killing process with pid 57785 00:04:47.305 10:22:47 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:47.305 10:22:47 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57785' 00:04:47.305 10:22:47 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 57785 00:04:47.305 10:22:47 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 57785 00:04:47.564 00:04:47.564 real 0m2.005s 00:04:47.564 user 0m3.768s 00:04:47.565 sys 0m0.498s 00:04:47.565 10:22:48 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:47.565 10:22:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:47.565 ************************************ 00:04:47.565 END TEST spdkcli_tcp 00:04:47.565 ************************************ 00:04:47.565 10:22:48 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:47.565 10:22:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:47.565 10:22:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:47.565 10:22:48 -- common/autotest_common.sh@10 -- # set +x 00:04:47.565 ************************************ 00:04:47.565 START TEST dpdk_mem_utility 00:04:47.565 ************************************ 00:04:47.565 10:22:48 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:47.565 * Looking for test storage... 
00:04:47.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:47.565 10:22:48 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:47.565 10:22:48 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:47.565 10:22:48 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:47.824 10:22:48 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.825 10:22:48 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:47.825 10:22:48 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.825 10:22:48 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:47.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.825 --rc genhtml_branch_coverage=1 00:04:47.825 --rc genhtml_function_coverage=1 00:04:47.825 --rc genhtml_legend=1 00:04:47.825 --rc geninfo_all_blocks=1 00:04:47.825 --rc geninfo_unexecuted_blocks=1 00:04:47.825 00:04:47.825 ' 00:04:47.825 10:22:48 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:47.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.825 --rc 
genhtml_branch_coverage=1 00:04:47.825 --rc genhtml_function_coverage=1 00:04:47.825 --rc genhtml_legend=1 00:04:47.825 --rc geninfo_all_blocks=1 00:04:47.825 --rc geninfo_unexecuted_blocks=1 00:04:47.825 00:04:47.825 ' 00:04:47.825 10:22:48 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:47.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.825 --rc genhtml_branch_coverage=1 00:04:47.825 --rc genhtml_function_coverage=1 00:04:47.825 --rc genhtml_legend=1 00:04:47.825 --rc geninfo_all_blocks=1 00:04:47.825 --rc geninfo_unexecuted_blocks=1 00:04:47.825 00:04:47.825 ' 00:04:47.825 10:22:48 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:47.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.825 --rc genhtml_branch_coverage=1 00:04:47.825 --rc genhtml_function_coverage=1 00:04:47.825 --rc genhtml_legend=1 00:04:47.825 --rc geninfo_all_blocks=1 00:04:47.825 --rc geninfo_unexecuted_blocks=1 00:04:47.825 00:04:47.825 ' 00:04:47.825 10:22:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:47.825 10:22:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57884 00:04:47.825 10:22:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:47.825 10:22:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57884 00:04:47.825 10:22:48 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 57884 ']' 00:04:47.825 10:22:48 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.825 10:22:48 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:47.825 10:22:48 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.825 10:22:48 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:47.825 10:22:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:47.825 [2024-11-15 10:22:48.612483] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
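The dpdk_mem_utility test starting here drives scripts/dpdk_mem_info.py against a memory dump produced over RPC, as the trace below shows. Reduced to its core steps, and assuming the default /tmp/spdk_mem_dump.txt dump location reported in this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Ask the target to dump DPDK memory stats; the reply names the dump file.
  $rpc env_dpdk_get_mem_stats
  # Summarize the dump, then show the detailed breakdown for heap 0 (-m 0, as in the trace).
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0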
00:04:47.825 [2024-11-15 10:22:48.612602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57884 ] 00:04:48.084 [2024-11-15 10:22:48.754164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.084 [2024-11-15 10:22:48.814081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.084 [2024-11-15 10:22:48.886537] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:49.027 10:22:49 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:49.027 10:22:49 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:04:49.027 10:22:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:49.027 10:22:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:49.027 10:22:49 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.027 10:22:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:49.027 { 00:04:49.027 "filename": "/tmp/spdk_mem_dump.txt" 00:04:49.027 } 00:04:49.027 10:22:49 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.027 10:22:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:49.027 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:49.027 1 heaps totaling size 818.000000 MiB 00:04:49.027 size: 818.000000 MiB heap id: 0 00:04:49.027 end heaps---------- 00:04:49.027 9 mempools totaling size 603.782043 MiB 00:04:49.027 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:49.027 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:49.027 size: 100.555481 MiB name: bdev_io_57884 00:04:49.027 size: 50.003479 MiB name: msgpool_57884 00:04:49.027 size: 36.509338 MiB name: fsdev_io_57884 00:04:49.027 size: 21.763794 MiB name: PDU_Pool 00:04:49.027 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:49.027 size: 4.133484 MiB name: evtpool_57884 00:04:49.027 size: 0.026123 MiB name: Session_Pool 00:04:49.027 end mempools------- 00:04:49.027 6 memzones totaling size 4.142822 MiB 00:04:49.027 size: 1.000366 MiB name: RG_ring_0_57884 00:04:49.027 size: 1.000366 MiB name: RG_ring_1_57884 00:04:49.027 size: 1.000366 MiB name: RG_ring_4_57884 00:04:49.027 size: 1.000366 MiB name: RG_ring_5_57884 00:04:49.027 size: 0.125366 MiB name: RG_ring_2_57884 00:04:49.027 size: 0.015991 MiB name: RG_ring_3_57884 00:04:49.027 end memzones------- 00:04:49.027 10:22:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:49.027 heap id: 0 total size: 818.000000 MiB number of busy elements: 319 number of free elements: 15 00:04:49.027 list of free elements. 
size: 10.802124 MiB 00:04:49.027 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:49.027 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:49.027 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:49.027 element at address: 0x200000400000 with size: 0.993958 MiB 00:04:49.027 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:49.027 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:49.027 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:49.027 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:49.027 element at address: 0x20001ae00000 with size: 0.567322 MiB 00:04:49.027 element at address: 0x20000a600000 with size: 0.488892 MiB 00:04:49.027 element at address: 0x200000c00000 with size: 0.486267 MiB 00:04:49.027 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:49.027 element at address: 0x200003e00000 with size: 0.480286 MiB 00:04:49.027 element at address: 0x200028200000 with size: 0.395752 MiB 00:04:49.027 element at address: 0x200000800000 with size: 0.351746 MiB 00:04:49.027 list of standard malloc elements. size: 199.268982 MiB 00:04:49.027 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:49.027 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:49.027 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:49.027 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:49.027 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:49.027 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:49.027 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:49.027 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:49.027 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:49.027 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:49.027 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:49.027 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:04:49.027 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:04:49.027 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:04:49.027 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:04:49.027 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:04:49.027 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:04:49.027 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:04:49.027 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:04:49.027 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:04:49.027 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:04:49.027 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:04:49.027 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:04:49.027 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:04:49.027 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:04:49.027 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:04:49.027 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:04:49.027 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:04:49.027 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:04:49.027 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:04:49.028 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:04:49.028 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:04:49.028 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:04:49.028 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:04:49.028 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:49.028 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:49.028 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000085e580 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000087e840 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000087e900 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000087f080 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000087f140 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000087f200 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000087f380 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000087f440 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000087f500 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:49.028 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:49.028 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:04:49.028 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:49.028 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:49.028 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:49.028 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:49.028 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae913c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae91480 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae92b00 with size: 0.000183 MiB 
00:04:49.028 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:04:49.028 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:04:49.029 element at 
address: 0x20001ae95080 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:49.029 element at address: 0x200028265500 with size: 0.000183 MiB 00:04:49.029 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826c480 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826c540 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826c600 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826c780 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826c840 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826c900 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826d080 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826d140 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826d200 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826d380 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826d440 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826d500 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826d680 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826d740 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826d800 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826d980 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826da40 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826db00 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826de00 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826df80 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826e040 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826e100 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826e1c0 
with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826e280 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826e340 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826e400 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826e580 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826e640 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826e700 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826e880 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826e940 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826f000 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826f180 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826f240 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826f300 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826f480 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826f540 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826f600 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826f780 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826f840 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826f900 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:49.029 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:49.029 list of memzone associated elements. 
size: 607.928894 MiB 00:04:49.029 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:49.029 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:49.029 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:49.029 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:49.029 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:49.029 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57884_0 00:04:49.029 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:49.029 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57884_0 00:04:49.029 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:49.029 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57884_0 00:04:49.029 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:49.029 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:49.029 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:49.029 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:49.029 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:49.030 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57884_0 00:04:49.030 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:49.030 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57884 00:04:49.030 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:49.030 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57884 00:04:49.030 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:49.030 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:49.030 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:49.030 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:49.030 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:49.030 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:49.030 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:49.030 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:49.030 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:49.030 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57884 00:04:49.030 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:49.030 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57884 00:04:49.030 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:49.030 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57884 00:04:49.030 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:04:49.030 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57884 00:04:49.030 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:49.030 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57884 00:04:49.030 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:49.030 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57884 00:04:49.030 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:49.030 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:49.030 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:49.030 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:49.030 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:49.030 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:49.030 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:49.030 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57884 00:04:49.030 element at address: 0x20000085e640 with size: 0.125488 MiB 00:04:49.030 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57884 00:04:49.030 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:49.030 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:49.030 element at address: 0x200028265680 with size: 0.023743 MiB 00:04:49.030 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:49.030 element at address: 0x20000085a380 with size: 0.016113 MiB 00:04:49.030 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57884 00:04:49.030 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:04:49.030 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:49.030 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:04:49.030 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57884 00:04:49.030 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:49.030 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57884 00:04:49.030 element at address: 0x20000085a180 with size: 0.000305 MiB 00:04:49.030 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57884 00:04:49.030 element at address: 0x20002826c280 with size: 0.000305 MiB 00:04:49.030 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:49.030 10:22:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:49.030 10:22:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57884 00:04:49.030 10:22:49 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 57884 ']' 00:04:49.030 10:22:49 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 57884 00:04:49.030 10:22:49 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:04:49.030 10:22:49 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:49.030 10:22:49 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57884 00:04:49.030 10:22:49 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:49.030 10:22:49 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:49.030 killing process with pid 57884 00:04:49.030 10:22:49 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57884' 00:04:49.030 10:22:49 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 57884 00:04:49.030 10:22:49 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 57884 00:04:49.598 00:04:49.598 real 0m1.866s 00:04:49.598 user 0m2.049s 00:04:49.598 sys 0m0.459s 00:04:49.598 10:22:50 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:49.598 10:22:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:49.598 ************************************ 00:04:49.598 END TEST dpdk_mem_utility 00:04:49.598 ************************************ 00:04:49.599 10:22:50 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:49.599 10:22:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:49.599 10:22:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:49.599 10:22:50 -- common/autotest_common.sh@10 -- # set +x 
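The dpdk_mem_utility run that just ended is driven by test/dpdk_memory_utility/test_dpdk_mem_info.sh: it starts spdk_tgt, asks it to dump DPDK memory statistics through the env_dpdk_get_mem_stats RPC (the target reports the dump file, /tmp/spdk_mem_dump.txt), post-processes that dump with scripts/dpdk_mem_info.py, once for the heap/mempool/memzone summary and once with -m 0 for the per-element view printed above, and finally kills the target. A rough manual replay of those steps, with scripts/rpc.py standing in for the rpc_cmd wrapper the test uses, could look like:

  cd /home/vagrant/spdk_repo/spdk
  ./build/bin/spdk_tgt &                      # the test waits for /var/tmp/spdk.sock before continuing
  tgt_pid=$!
  sleep 2                                     # crude stand-in for the waitforlisten helper
  ./scripts/rpc.py env_dpdk_get_mem_stats     # returns {"filename": "/tmp/spdk_mem_dump.txt"}
  ./scripts/dpdk_mem_info.py                  # heaps, mempools and memzones summary
  ./scripts/dpdk_mem_info.py -m 0             # per-element breakdown of heap 0
  kill "$tgt_pid"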
00:04:49.599 ************************************ 00:04:49.599 START TEST event 00:04:49.599 ************************************ 00:04:49.599 10:22:50 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:49.599 * Looking for test storage... 00:04:49.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:49.599 10:22:50 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:49.599 10:22:50 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:49.599 10:22:50 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:49.599 10:22:50 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:49.599 10:22:50 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.599 10:22:50 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.599 10:22:50 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.599 10:22:50 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.599 10:22:50 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.599 10:22:50 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.599 10:22:50 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.599 10:22:50 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.599 10:22:50 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.599 10:22:50 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.599 10:22:50 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.599 10:22:50 event -- scripts/common.sh@344 -- # case "$op" in 00:04:49.599 10:22:50 event -- scripts/common.sh@345 -- # : 1 00:04:49.599 10:22:50 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.599 10:22:50 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.599 10:22:50 event -- scripts/common.sh@365 -- # decimal 1 00:04:49.599 10:22:50 event -- scripts/common.sh@353 -- # local d=1 00:04:49.599 10:22:50 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.599 10:22:50 event -- scripts/common.sh@355 -- # echo 1 00:04:49.599 10:22:50 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.599 10:22:50 event -- scripts/common.sh@366 -- # decimal 2 00:04:49.599 10:22:50 event -- scripts/common.sh@353 -- # local d=2 00:04:49.599 10:22:50 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.599 10:22:50 event -- scripts/common.sh@355 -- # echo 2 00:04:49.599 10:22:50 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.599 10:22:50 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.599 10:22:50 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.599 10:22:50 event -- scripts/common.sh@368 -- # return 0 00:04:49.599 10:22:50 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.599 10:22:50 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:49.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.599 --rc genhtml_branch_coverage=1 00:04:49.599 --rc genhtml_function_coverage=1 00:04:49.599 --rc genhtml_legend=1 00:04:49.599 --rc geninfo_all_blocks=1 00:04:49.599 --rc geninfo_unexecuted_blocks=1 00:04:49.599 00:04:49.599 ' 00:04:49.599 10:22:50 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:49.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.599 --rc genhtml_branch_coverage=1 00:04:49.599 --rc genhtml_function_coverage=1 00:04:49.599 --rc genhtml_legend=1 00:04:49.599 --rc 
geninfo_all_blocks=1 00:04:49.599 --rc geninfo_unexecuted_blocks=1 00:04:49.599 00:04:49.599 ' 00:04:49.599 10:22:50 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:49.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.599 --rc genhtml_branch_coverage=1 00:04:49.599 --rc genhtml_function_coverage=1 00:04:49.599 --rc genhtml_legend=1 00:04:49.599 --rc geninfo_all_blocks=1 00:04:49.599 --rc geninfo_unexecuted_blocks=1 00:04:49.599 00:04:49.599 ' 00:04:49.599 10:22:50 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:49.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.599 --rc genhtml_branch_coverage=1 00:04:49.599 --rc genhtml_function_coverage=1 00:04:49.599 --rc genhtml_legend=1 00:04:49.599 --rc geninfo_all_blocks=1 00:04:49.599 --rc geninfo_unexecuted_blocks=1 00:04:49.599 00:04:49.599 ' 00:04:49.599 10:22:50 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:49.599 10:22:50 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:49.599 10:22:50 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:49.599 10:22:50 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:04:49.599 10:22:50 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:49.599 10:22:50 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.599 ************************************ 00:04:49.599 START TEST event_perf 00:04:49.599 ************************************ 00:04:49.599 10:22:50 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:49.599 Running I/O for 1 seconds...[2024-11-15 10:22:50.447643] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:04:49.599 [2024-11-15 10:22:50.447727] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57964 ] 00:04:49.858 [2024-11-15 10:22:50.606965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:49.858 [2024-11-15 10:22:50.679746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.858 [2024-11-15 10:22:50.679887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:49.858 [2024-11-15 10:22:50.679980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:49.858 [2024-11-15 10:22:50.679990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.230 Running I/O for 1 seconds... 00:04:51.231 lcore 0: 207823 00:04:51.231 lcore 1: 207822 00:04:51.231 lcore 2: 207822 00:04:51.231 lcore 3: 207823 00:04:51.231 done. 
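The event_perf binary above is started by event.sh as test/event/event_perf/event_perf -m 0xF -t 1: it brings up one reactor per core in the mask, keeps them busy with events for the requested number of seconds, and prints a per-lcore tally, which is where the four counts around 207,8xx before "done." come from. The same binary can be run by hand with a different mask or duration; the -m and -t flags are the ones shown in the log, the alternative values below are only illustrative:

  cd /home/vagrant/spdk_repo/spdk
  ./test/event/event_perf/event_perf -m 0xF -t 1    # the run above: reactors on cores 0-3 for 1 second
  ./test/event/event_perf/event_perf -m 0x3 -t 5    # e.g. only cores 0-1, for 5 seconds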
00:04:51.231 00:04:51.231 real 0m1.305s 00:04:51.231 user 0m4.120s 00:04:51.231 sys 0m0.051s 00:04:51.231 10:22:51 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:51.231 ************************************ 00:04:51.231 END TEST event_perf 00:04:51.231 10:22:51 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:51.231 ************************************ 00:04:51.231 10:22:51 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:51.231 10:22:51 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:51.231 10:22:51 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:51.231 10:22:51 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.231 ************************************ 00:04:51.231 START TEST event_reactor 00:04:51.231 ************************************ 00:04:51.231 10:22:51 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:51.231 [2024-11-15 10:22:51.796787] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:04:51.231 [2024-11-15 10:22:51.796897] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58008 ] 00:04:51.231 [2024-11-15 10:22:51.943506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.231 [2024-11-15 10:22:52.003147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.608 test_start 00:04:52.608 oneshot 00:04:52.608 tick 100 00:04:52.608 tick 100 00:04:52.608 tick 250 00:04:52.608 tick 100 00:04:52.608 tick 100 00:04:52.608 tick 100 00:04:52.608 tick 250 00:04:52.608 tick 500 00:04:52.608 tick 100 00:04:52.608 tick 100 00:04:52.608 tick 250 00:04:52.608 tick 100 00:04:52.608 tick 100 00:04:52.608 test_end 00:04:52.608 00:04:52.608 real 0m1.277s 00:04:52.608 user 0m1.123s 00:04:52.608 sys 0m0.047s 00:04:52.608 10:22:53 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:52.608 10:22:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:52.608 ************************************ 00:04:52.608 END TEST event_reactor 00:04:52.608 ************************************ 00:04:52.608 10:22:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:52.608 10:22:53 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:52.608 10:22:53 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:52.608 10:22:53 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.608 ************************************ 00:04:52.608 START TEST event_reactor_perf 00:04:52.608 ************************************ 00:04:52.608 10:22:53 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:52.608 [2024-11-15 10:22:53.123905] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:04:52.608 [2024-11-15 10:22:53.124033] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58038 ] 00:04:52.608 [2024-11-15 10:22:53.271744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.608 [2024-11-15 10:22:53.331743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.543 test_start 00:04:53.543 test_end 00:04:53.543 Performance: 374925 events per second 00:04:53.543 00:04:53.543 real 0m1.279s 00:04:53.543 user 0m1.127s 00:04:53.543 sys 0m0.046s 00:04:53.543 10:22:54 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:53.543 10:22:54 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:53.543 ************************************ 00:04:53.543 END TEST event_reactor_perf 00:04:53.543 ************************************ 00:04:53.801 10:22:54 event -- event/event.sh@49 -- # uname -s 00:04:53.801 10:22:54 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:53.801 10:22:54 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:53.801 10:22:54 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:53.801 10:22:54 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:53.801 10:22:54 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.801 ************************************ 00:04:53.801 START TEST event_scheduler 00:04:53.801 ************************************ 00:04:53.801 10:22:54 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:53.801 * Looking for test storage... 
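event_reactor exercised the oneshot and tick timers on a single reactor, and event_reactor_perf just measured raw event throughput the same way, 374925 events per second in this run. The event_scheduler test starting here works differently: scheduler.sh launches the scheduler app with -m 0xF -p 0x2 --wait-for-rpc -f, switches it to the dynamic scheduler over the RPC socket, and only then finishes framework init so the test can create threads with different activity levels and watch them being rebalanced. A sketch of that control sequence, with scripts/rpc.py standing in for the test's rpc_cmd wrapper:

  cd /home/vagrant/spdk_repo/spdk
  ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  sleep 2                                            # the test itself uses waitforlisten on /var/tmp/spdk.sock
  ./scripts/rpc.py framework_set_scheduler dynamic   # select the dynamic scheduler before init
  ./scripts/rpc.py framework_start_init              # complete init; reactors start and balancing begins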
00:04:53.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:53.801 10:22:54 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:53.801 10:22:54 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:53.801 10:22:54 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:53.801 10:22:54 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:53.801 10:22:54 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.801 10:22:54 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.802 10:22:54 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:53.802 10:22:54 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.802 10:22:54 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:53.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.802 --rc genhtml_branch_coverage=1 00:04:53.802 --rc genhtml_function_coverage=1 00:04:53.802 --rc genhtml_legend=1 00:04:53.802 --rc geninfo_all_blocks=1 00:04:53.802 --rc geninfo_unexecuted_blocks=1 00:04:53.802 00:04:53.802 ' 00:04:53.802 10:22:54 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:53.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.802 --rc genhtml_branch_coverage=1 00:04:53.802 --rc genhtml_function_coverage=1 00:04:53.802 --rc genhtml_legend=1 00:04:53.802 --rc geninfo_all_blocks=1 00:04:53.802 --rc geninfo_unexecuted_blocks=1 00:04:53.802 00:04:53.802 ' 00:04:53.802 10:22:54 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:53.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.802 --rc genhtml_branch_coverage=1 00:04:53.802 --rc genhtml_function_coverage=1 00:04:53.802 --rc genhtml_legend=1 00:04:53.802 --rc geninfo_all_blocks=1 00:04:53.802 --rc geninfo_unexecuted_blocks=1 00:04:53.802 00:04:53.802 ' 00:04:53.802 10:22:54 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:53.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.802 --rc genhtml_branch_coverage=1 00:04:53.802 --rc genhtml_function_coverage=1 00:04:53.802 --rc genhtml_legend=1 00:04:53.802 --rc geninfo_all_blocks=1 00:04:53.802 --rc geninfo_unexecuted_blocks=1 00:04:53.802 00:04:53.802 ' 00:04:53.802 10:22:54 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:53.802 10:22:54 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58107 00:04:53.802 10:22:54 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.802 10:22:54 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:53.802 10:22:54 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58107 00:04:53.802 10:22:54 
event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58107 ']' 00:04:53.802 10:22:54 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.802 10:22:54 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:53.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.802 10:22:54 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.802 10:22:54 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:53.802 10:22:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.059 [2024-11-15 10:22:54.695769] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:04:54.059 [2024-11-15 10:22:54.695895] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58107 ] 00:04:54.059 [2024-11-15 10:22:54.849390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:54.317 [2024-11-15 10:22:54.920414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.318 [2024-11-15 10:22:54.920454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.318 [2024-11-15 10:22:54.920607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:54.318 [2024-11-15 10:22:54.920609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:54.957 10:22:55 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:54.957 10:22:55 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:04:54.957 10:22:55 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:54.957 10:22:55 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.957 10:22:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.957 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:54.957 POWER: Cannot set governor of lcore 0 to userspace 00:04:54.957 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:54.957 POWER: Cannot set governor of lcore 0 to performance 00:04:54.957 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:54.957 POWER: Cannot set governor of lcore 0 to userspace 00:04:54.957 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:54.957 POWER: Cannot set governor of lcore 0 to userspace 00:04:54.957 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:54.957 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:54.957 POWER: Unable to set Power Management Environment for lcore 0 00:04:54.957 [2024-11-15 10:22:55.745981] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:54.957 [2024-11-15 10:22:55.745995] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:54.957 [2024-11-15 10:22:55.746005] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:54.957 [2024-11-15 10:22:55.746016] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:54.957 [2024-11-15 10:22:55.746024] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:54.957 [2024-11-15 10:22:55.746031] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:54.957 10:22:55 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.957 10:22:55 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:54.957 10:22:55 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.957 10:22:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:55.215 [2024-11-15 10:22:55.810769] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:55.216 [2024-11-15 10:22:55.848540] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:55.216 10:22:55 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.216 10:22:55 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:55.216 10:22:55 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:55.216 10:22:55 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:55.216 10:22:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:55.216 ************************************ 00:04:55.216 START TEST scheduler_create_thread 00:04:55.216 ************************************ 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.216 2 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.216 3 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.216 4 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.216 5 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.216 6 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.216 7 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.216 8 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.216 9 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.216 10 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.216 10:22:55 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.216 10:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.782 10:22:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.782 10:22:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:55.782 10:22:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:55.782 10:22:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.782 10:22:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.158 ************************************ 00:04:57.158 END TEST scheduler_create_thread 00:04:57.158 ************************************ 00:04:57.158 10:22:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.158 00:04:57.158 real 0m1.749s 00:04:57.158 user 0m0.021s 00:04:57.158 sys 0m0.005s 00:04:57.158 10:22:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:57.158 10:22:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.158 10:22:57 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:57.158 10:22:57 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58107 00:04:57.158 10:22:57 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58107 ']' 00:04:57.158 10:22:57 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58107 00:04:57.158 10:22:57 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:04:57.158 10:22:57 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:57.158 10:22:57 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58107 00:04:57.158 killing process with pid 58107 00:04:57.158 10:22:57 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:57.158 10:22:57 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:57.158 10:22:57 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
58107' 00:04:57.158 10:22:57 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58107 00:04:57.158 10:22:57 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58107 00:04:57.415 [2024-11-15 10:22:58.091330] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:57.674 ************************************ 00:04:57.674 END TEST event_scheduler 00:04:57.674 ************************************ 00:04:57.674 00:04:57.674 real 0m3.841s 00:04:57.674 user 0m7.139s 00:04:57.674 sys 0m0.410s 00:04:57.674 10:22:58 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:57.674 10:22:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:57.674 10:22:58 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:57.674 10:22:58 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:57.674 10:22:58 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:57.674 10:22:58 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:57.674 10:22:58 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.674 ************************************ 00:04:57.674 START TEST app_repeat 00:04:57.674 ************************************ 00:04:57.674 10:22:58 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:04:57.674 10:22:58 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.674 10:22:58 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.674 10:22:58 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:57.674 10:22:58 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.674 10:22:58 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:57.674 10:22:58 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:57.674 10:22:58 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:57.674 10:22:58 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58196 00:04:57.674 10:22:58 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:57.674 10:22:58 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.674 Process app_repeat pid: 58196 00:04:57.674 spdk_app_start Round 0 00:04:57.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:57.674 10:22:58 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58196' 00:04:57.674 10:22:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:57.674 10:22:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:57.674 10:22:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58196 /var/tmp/spdk-nbd.sock 00:04:57.674 10:22:58 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58196 ']' 00:04:57.674 10:22:58 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.674 10:22:58 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:57.674 10:22:58 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:57.674 10:22:58 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:57.674 10:22:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.674 [2024-11-15 10:22:58.370075] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:04:57.674 [2024-11-15 10:22:58.370401] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58196 ] 00:04:57.674 [2024-11-15 10:22:58.513951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.933 [2024-11-15 10:22:58.565004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.933 [2024-11-15 10:22:58.565013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.933 [2024-11-15 10:22:58.620589] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:57.933 10:22:58 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:57.933 10:22:58 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:57.933 10:22:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.211 Malloc0 00:04:58.211 10:22:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.470 Malloc1 00:04:58.470 10:22:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.470 10:22:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.470 10:22:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.470 10:22:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:58.470 10:22:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.470 10:22:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:58.470 10:22:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.470 10:22:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.470 10:22:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.470 10:22:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:58.470 10:22:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.470 10:22:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:58.470 10:22:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:58.470 10:22:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:58.470 10:22:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.470 10:22:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:59.036 /dev/nbd0 00:04:59.036 10:22:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:59.036 10:22:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:59.036 10:22:59 event.app_repeat -- common/autotest_common.sh@870 -- # local 
nbd_name=nbd0 00:04:59.036 10:22:59 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:59.036 10:22:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:59.036 10:22:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:59.036 10:22:59 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:59.036 10:22:59 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:59.036 10:22:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:59.036 10:22:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:59.036 10:22:59 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.036 1+0 records in 00:04:59.036 1+0 records out 00:04:59.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361335 s, 11.3 MB/s 00:04:59.036 10:22:59 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.036 10:22:59 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:59.036 10:22:59 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.036 10:22:59 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:59.036 10:22:59 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:59.036 10:22:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.036 10:22:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.036 10:22:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:59.294 /dev/nbd1 00:04:59.294 10:22:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:59.294 10:22:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:59.294 10:22:59 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:59.294 10:22:59 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:59.294 10:22:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:59.294 10:22:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:59.294 10:22:59 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:59.294 10:22:59 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:59.294 10:22:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:59.294 10:22:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:59.294 10:22:59 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.294 1+0 records in 00:04:59.294 1+0 records out 00:04:59.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232928 s, 17.6 MB/s 00:04:59.294 10:22:59 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.294 10:22:59 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:59.294 10:22:59 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.294 10:22:59 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:59.294 10:22:59 event.app_repeat -- 
common/autotest_common.sh@891 -- # return 0 00:04:59.294 10:22:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.294 10:22:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.294 10:22:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.294 10:22:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.294 10:22:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:59.552 { 00:04:59.552 "nbd_device": "/dev/nbd0", 00:04:59.552 "bdev_name": "Malloc0" 00:04:59.552 }, 00:04:59.552 { 00:04:59.552 "nbd_device": "/dev/nbd1", 00:04:59.552 "bdev_name": "Malloc1" 00:04:59.552 } 00:04:59.552 ]' 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:59.552 { 00:04:59.552 "nbd_device": "/dev/nbd0", 00:04:59.552 "bdev_name": "Malloc0" 00:04:59.552 }, 00:04:59.552 { 00:04:59.552 "nbd_device": "/dev/nbd1", 00:04:59.552 "bdev_name": "Malloc1" 00:04:59.552 } 00:04:59.552 ]' 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:59.552 /dev/nbd1' 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:59.552 /dev/nbd1' 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:59.552 256+0 records in 00:04:59.552 256+0 records out 00:04:59.552 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107234 s, 97.8 MB/s 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:59.552 256+0 records in 00:04:59.552 256+0 records out 00:04:59.552 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023499 s, 44.6 MB/s 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:59.552 256+0 records in 00:04:59.552 
256+0 records out 00:04:59.552 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027191 s, 38.6 MB/s 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:59.552 10:23:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.553 10:23:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:59.553 10:23:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.553 10:23:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:59.553 10:23:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:59.553 10:23:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:59.553 10:23:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.553 10:23:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.553 10:23:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:59.553 10:23:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:59.553 10:23:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.553 10:23:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:59.811 10:23:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:59.811 10:23:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:59.811 10:23:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:59.811 10:23:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.811 10:23:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.811 10:23:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:59.811 10:23:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:59.811 10:23:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.811 10:23:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.811 10:23:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:00.070 10:23:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:00.070 10:23:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:00.070 10:23:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:00.070 10:23:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.070 10:23:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:05:00.070 10:23:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:00.070 10:23:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:00.070 10:23:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.070 10:23:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.070 10:23:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.070 10:23:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.637 10:23:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:00.637 10:23:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:00.637 10:23:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.637 10:23:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:00.637 10:23:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:00.637 10:23:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.637 10:23:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:00.637 10:23:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:00.637 10:23:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:00.637 10:23:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:00.637 10:23:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:00.637 10:23:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:00.637 10:23:01 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:00.896 10:23:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:01.155 [2024-11-15 10:23:01.750733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.155 [2024-11-15 10:23:01.790217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.155 [2024-11-15 10:23:01.790221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.155 [2024-11-15 10:23:01.845021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:01.155 [2024-11-15 10:23:01.845135] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:01.155 [2024-11-15 10:23:01.845148] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:04.449 10:23:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:04.449 10:23:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:04.449 spdk_app_start Round 1 00:05:04.449 10:23:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58196 /var/tmp/spdk-nbd.sock 00:05:04.449 10:23:04 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58196 ']' 00:05:04.449 10:23:04 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:04.449 10:23:04 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:04.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:04.449 10:23:04 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:04.449 10:23:04 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:04.449 10:23:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:04.449 10:23:04 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:04.449 10:23:04 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:04.449 10:23:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.449 Malloc0 00:05:04.449 10:23:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.709 Malloc1 00:05:04.709 10:23:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.709 10:23:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.709 10:23:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.709 10:23:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:04.709 10:23:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.709 10:23:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:04.709 10:23:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.709 10:23:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.709 10:23:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.709 10:23:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:04.709 10:23:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.709 10:23:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:04.709 10:23:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:04.709 10:23:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:04.709 10:23:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.709 10:23:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:04.968 /dev/nbd0 00:05:04.968 10:23:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:04.968 10:23:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:04.968 10:23:05 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:04.968 10:23:05 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:04.968 10:23:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:04.968 10:23:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:04.968 10:23:05 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:04.968 10:23:05 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:04.968 10:23:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:04.968 10:23:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:04.968 10:23:05 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.968 1+0 records in 00:05:04.968 1+0 records out 
00:05:04.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351877 s, 11.6 MB/s 00:05:04.968 10:23:05 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.968 10:23:05 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:04.968 10:23:05 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.968 10:23:05 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:04.968 10:23:05 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:04.968 10:23:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.968 10:23:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.968 10:23:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:05.227 /dev/nbd1 00:05:05.227 10:23:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:05.227 10:23:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:05.227 10:23:05 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:05.227 10:23:05 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:05.227 10:23:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:05.227 10:23:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:05.227 10:23:05 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:05.227 10:23:05 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:05.227 10:23:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:05.227 10:23:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:05.227 10:23:05 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.227 1+0 records in 00:05:05.227 1+0 records out 00:05:05.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276455 s, 14.8 MB/s 00:05:05.227 10:23:05 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.227 10:23:05 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:05.227 10:23:05 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.227 10:23:05 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:05.227 10:23:05 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:05.227 10:23:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.227 10:23:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.227 10:23:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.227 10:23:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.227 10:23:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.486 10:23:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:05.486 { 00:05:05.486 "nbd_device": "/dev/nbd0", 00:05:05.486 "bdev_name": "Malloc0" 00:05:05.486 }, 00:05:05.486 { 00:05:05.486 "nbd_device": "/dev/nbd1", 00:05:05.486 "bdev_name": "Malloc1" 00:05:05.486 } 
00:05:05.486 ]' 00:05:05.486 10:23:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.486 10:23:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:05.486 { 00:05:05.486 "nbd_device": "/dev/nbd0", 00:05:05.486 "bdev_name": "Malloc0" 00:05:05.486 }, 00:05:05.486 { 00:05:05.486 "nbd_device": "/dev/nbd1", 00:05:05.486 "bdev_name": "Malloc1" 00:05:05.486 } 00:05:05.486 ]' 00:05:05.486 10:23:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:05.486 /dev/nbd1' 00:05:05.486 10:23:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:05.486 /dev/nbd1' 00:05:05.486 10:23:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.486 10:23:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:05.486 10:23:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:05.486 10:23:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:05.486 10:23:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:05.486 10:23:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:05.486 10:23:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.486 10:23:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.486 10:23:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:05.486 10:23:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.486 10:23:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:05.486 10:23:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:05.745 256+0 records in 00:05:05.745 256+0 records out 00:05:05.745 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00790018 s, 133 MB/s 00:05:05.745 10:23:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.745 10:23:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:05.745 256+0 records in 00:05:05.745 256+0 records out 00:05:05.745 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195307 s, 53.7 MB/s 00:05:05.745 10:23:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.745 10:23:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:05.745 256+0 records in 00:05:05.745 256+0 records out 00:05:05.745 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254517 s, 41.2 MB/s 00:05:05.745 10:23:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:05.745 10:23:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.745 10:23:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.745 10:23:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:05.745 10:23:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.745 10:23:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:05.745 10:23:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:05.745 10:23:06 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.745 10:23:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:05.745 10:23:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.745 10:23:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:05.745 10:23:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.745 10:23:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:05.745 10:23:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.745 10:23:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.745 10:23:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:05.745 10:23:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:05.745 10:23:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.745 10:23:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:06.004 10:23:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:06.004 10:23:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:06.004 10:23:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:06.004 10:23:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.004 10:23:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.004 10:23:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:06.004 10:23:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.004 10:23:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.004 10:23:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.004 10:23:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:06.263 10:23:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:06.263 10:23:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:06.263 10:23:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:06.263 10:23:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.263 10:23:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.263 10:23:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:06.263 10:23:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.263 10:23:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.263 10:23:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.263 10:23:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.263 10:23:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.522 10:23:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:06.522 10:23:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:06.522 10:23:07 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:06.781 10:23:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:06.781 10:23:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:06.781 10:23:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.781 10:23:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:06.781 10:23:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:06.781 10:23:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:06.781 10:23:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:06.781 10:23:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:06.781 10:23:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:06.781 10:23:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.040 10:23:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:07.299 [2024-11-15 10:23:07.900433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.299 [2024-11-15 10:23:07.939336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.299 [2024-11-15 10:23:07.939342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.299 [2024-11-15 10:23:07.995825] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:07.299 [2024-11-15 10:23:07.995970] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:07.299 [2024-11-15 10:23:07.995983] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:10.586 spdk_app_start Round 2 00:05:10.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:10.586 10:23:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:10.586 10:23:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:10.586 10:23:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58196 /var/tmp/spdk-nbd.sock 00:05:10.586 10:23:10 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58196 ']' 00:05:10.586 10:23:10 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:10.586 10:23:10 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:10.586 10:23:10 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:10.586 10:23:10 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:10.586 10:23:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.586 10:23:11 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:10.586 10:23:11 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:10.586 10:23:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.586 Malloc0 00:05:10.586 10:23:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.845 Malloc1 00:05:10.845 10:23:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.845 10:23:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.845 10:23:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.845 10:23:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:10.845 10:23:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.845 10:23:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:10.845 10:23:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.845 10:23:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.845 10:23:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.845 10:23:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:10.845 10:23:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.845 10:23:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:10.845 10:23:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:10.845 10:23:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:10.845 10:23:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.845 10:23:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:11.105 /dev/nbd0 00:05:11.105 10:23:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:11.105 10:23:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:11.105 10:23:11 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:11.105 10:23:11 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:11.105 10:23:11 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:11.105 10:23:11 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:11.105 10:23:11 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:11.105 10:23:11 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:11.105 10:23:11 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:11.105 10:23:11 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:11.105 10:23:11 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.105 1+0 records in 00:05:11.105 1+0 records out 
00:05:11.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284929 s, 14.4 MB/s 00:05:11.105 10:23:11 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.105 10:23:11 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:11.105 10:23:11 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.105 10:23:11 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:11.105 10:23:11 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:11.105 10:23:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.105 10:23:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.105 10:23:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:11.365 /dev/nbd1 00:05:11.365 10:23:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:11.365 10:23:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:11.365 10:23:12 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:11.365 10:23:12 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:11.365 10:23:12 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:11.365 10:23:12 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:11.365 10:23:12 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:11.365 10:23:12 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:11.365 10:23:12 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:11.365 10:23:12 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:11.365 10:23:12 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.365 1+0 records in 00:05:11.365 1+0 records out 00:05:11.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265226 s, 15.4 MB/s 00:05:11.365 10:23:12 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.365 10:23:12 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:11.365 10:23:12 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.365 10:23:12 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:11.365 10:23:12 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:11.365 10:23:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.365 10:23:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.365 10:23:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.365 10:23:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.365 10:23:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.625 10:23:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:11.625 { 00:05:11.625 "nbd_device": "/dev/nbd0", 00:05:11.625 "bdev_name": "Malloc0" 00:05:11.625 }, 00:05:11.625 { 00:05:11.625 "nbd_device": "/dev/nbd1", 00:05:11.625 "bdev_name": "Malloc1" 00:05:11.625 } 
00:05:11.625 ]' 00:05:11.625 10:23:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.625 10:23:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:11.625 { 00:05:11.625 "nbd_device": "/dev/nbd0", 00:05:11.625 "bdev_name": "Malloc0" 00:05:11.625 }, 00:05:11.625 { 00:05:11.625 "nbd_device": "/dev/nbd1", 00:05:11.625 "bdev_name": "Malloc1" 00:05:11.625 } 00:05:11.625 ]' 00:05:11.625 10:23:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:11.625 /dev/nbd1' 00:05:11.625 10:23:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:11.625 /dev/nbd1' 00:05:11.625 10:23:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.625 10:23:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:11.625 10:23:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:11.625 10:23:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:11.625 10:23:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:11.625 10:23:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:11.625 10:23:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.625 10:23:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.625 10:23:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:11.625 10:23:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.625 10:23:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:11.625 10:23:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:11.625 256+0 records in 00:05:11.625 256+0 records out 00:05:11.625 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103532 s, 101 MB/s 00:05:11.625 10:23:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.625 10:23:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:11.884 256+0 records in 00:05:11.884 256+0 records out 00:05:11.884 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200338 s, 52.3 MB/s 00:05:11.884 10:23:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.884 10:23:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:11.884 256+0 records in 00:05:11.884 256+0 records out 00:05:11.884 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244901 s, 42.8 MB/s 00:05:11.884 10:23:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:11.884 10:23:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.884 10:23:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.884 10:23:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:11.884 10:23:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.884 10:23:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:11.884 10:23:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:11.884 10:23:12 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:11.884 10:23:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:11.884 10:23:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.884 10:23:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:11.884 10:23:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.884 10:23:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:11.884 10:23:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.884 10:23:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.884 10:23:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:11.884 10:23:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:11.884 10:23:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.884 10:23:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:12.143 10:23:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:12.143 10:23:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:12.143 10:23:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:12.143 10:23:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.143 10:23:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.143 10:23:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:12.143 10:23:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.143 10:23:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.143 10:23:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.143 10:23:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:12.403 10:23:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:12.403 10:23:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:12.403 10:23:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:12.403 10:23:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.403 10:23:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.403 10:23:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:12.403 10:23:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.403 10:23:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.403 10:23:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.403 10:23:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.403 10:23:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.662 10:23:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:12.662 10:23:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:12.662 10:23:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:12.662 10:23:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:12.662 10:23:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:12.662 10:23:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.662 10:23:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:12.662 10:23:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:12.662 10:23:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:12.662 10:23:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:12.662 10:23:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:12.662 10:23:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:12.662 10:23:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:13.230 10:23:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:13.230 [2024-11-15 10:23:14.003749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.230 [2024-11-15 10:23:14.044968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.230 [2024-11-15 10:23:14.044986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.490 [2024-11-15 10:23:14.097150] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:13.490 [2024-11-15 10:23:14.097267] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:13.490 [2024-11-15 10:23:14.097280] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:16.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:16.024 10:23:16 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58196 /var/tmp/spdk-nbd.sock 00:05:16.024 10:23:16 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58196 ']' 00:05:16.024 10:23:16 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.024 10:23:16 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:16.024 10:23:16 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
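The nbd_common.sh trace above reduces to a write-then-verify pass over the two exported NBD devices, followed by an orderly teardown. A minimal sketch of that flow, reusing the temp file, device list and RPC socket from this run (all of them placeholders taken from the trace, not a general-purpose helper; the polling interval is an assumption):

    rpc_sock=/var/tmp/spdk-nbd.sock
    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write: 1 MiB of random data, copied onto every exported device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify: every device must match the reference file byte for byte
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"

    # teardown: detach each device, then wait for it to leave /proc/partitions
    for dev in "${nbd_list[@]}"; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" nbd_stop_disk "$dev"
        name=$(basename "$dev")
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || break   # gone: stop polling
            sleep 0.1                                      # interval is an assumption
        done
    done

Once nbd_get_disks reports an empty list, the app itself is shut down with spdk_kill_instance SIGTERM, which is the last RPC in the trace above.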
00:05:16.024 10:23:16 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:16.024 10:23:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.592 10:23:17 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:16.592 10:23:17 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:16.592 10:23:17 event.app_repeat -- event/event.sh@39 -- # killprocess 58196 00:05:16.592 10:23:17 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58196 ']' 00:05:16.592 10:23:17 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58196 00:05:16.592 10:23:17 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:05:16.592 10:23:17 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:16.592 10:23:17 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58196 00:05:16.592 killing process with pid 58196 00:05:16.592 10:23:17 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:16.592 10:23:17 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:16.592 10:23:17 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58196' 00:05:16.592 10:23:17 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58196 00:05:16.592 10:23:17 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58196 00:05:16.592 spdk_app_start is called in Round 0. 00:05:16.592 Shutdown signal received, stop current app iteration 00:05:16.592 Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 reinitialization... 00:05:16.592 spdk_app_start is called in Round 1. 00:05:16.592 Shutdown signal received, stop current app iteration 00:05:16.592 Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 reinitialization... 00:05:16.592 spdk_app_start is called in Round 2. 00:05:16.592 Shutdown signal received, stop current app iteration 00:05:16.592 Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 reinitialization... 00:05:16.592 spdk_app_start is called in Round 3. 00:05:16.592 Shutdown signal received, stop current app iteration 00:05:16.592 10:23:17 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:16.592 10:23:17 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:16.592 00:05:16.592 real 0m19.024s 00:05:16.592 user 0m43.459s 00:05:16.592 sys 0m2.869s 00:05:16.592 ************************************ 00:05:16.592 END TEST app_repeat 00:05:16.592 ************************************ 00:05:16.592 10:23:17 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:16.592 10:23:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.592 10:23:17 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:16.592 10:23:17 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:16.592 10:23:17 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:16.592 10:23:17 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:16.592 10:23:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.592 ************************************ 00:05:16.592 START TEST cpu_locks 00:05:16.592 ************************************ 00:05:16.592 10:23:17 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:16.851 * Looking for test storage... 
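The repeated "Waiting for process to start up and listen on UNIX domain socket ..." lines come from a polling helper that blocks until the target process is both alive and answering RPCs on its socket. A rough sketch of that idea (the probe RPC, retry count and sleep interval here are assumptions, not the exact autotest_common.sh implementation):

    # Hypothetical waitforlisten-style helper.
    waitforlisten_sketch() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                             # socket exists and answers
            fi
            sleep 0.1
        done
        return 1                                     # gave up after max_retries probes
    }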
00:05:16.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:16.851 10:23:17 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:16.851 10:23:17 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:16.851 10:23:17 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:16.851 10:23:17 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.851 10:23:17 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:16.851 10:23:17 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.851 10:23:17 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:16.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.851 --rc genhtml_branch_coverage=1 00:05:16.851 --rc genhtml_function_coverage=1 00:05:16.851 --rc genhtml_legend=1 00:05:16.851 --rc geninfo_all_blocks=1 00:05:16.851 --rc geninfo_unexecuted_blocks=1 00:05:16.851 00:05:16.851 ' 00:05:16.851 10:23:17 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:16.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.851 --rc genhtml_branch_coverage=1 00:05:16.851 --rc genhtml_function_coverage=1 
00:05:16.851 --rc genhtml_legend=1 00:05:16.851 --rc geninfo_all_blocks=1 00:05:16.851 --rc geninfo_unexecuted_blocks=1 00:05:16.851 00:05:16.851 ' 00:05:16.851 10:23:17 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:16.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.851 --rc genhtml_branch_coverage=1 00:05:16.851 --rc genhtml_function_coverage=1 00:05:16.851 --rc genhtml_legend=1 00:05:16.851 --rc geninfo_all_blocks=1 00:05:16.851 --rc geninfo_unexecuted_blocks=1 00:05:16.851 00:05:16.851 ' 00:05:16.851 10:23:17 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:16.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.851 --rc genhtml_branch_coverage=1 00:05:16.851 --rc genhtml_function_coverage=1 00:05:16.851 --rc genhtml_legend=1 00:05:16.851 --rc geninfo_all_blocks=1 00:05:16.851 --rc geninfo_unexecuted_blocks=1 00:05:16.851 00:05:16.851 ' 00:05:16.851 10:23:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:16.851 10:23:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:16.851 10:23:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:16.851 10:23:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:16.851 10:23:17 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:16.851 10:23:17 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:16.851 10:23:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.851 ************************************ 00:05:16.851 START TEST default_locks 00:05:16.851 ************************************ 00:05:16.851 10:23:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:05:16.851 10:23:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58640 00:05:16.851 10:23:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58640 00:05:16.851 10:23:17 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58640 ']' 00:05:16.852 10:23:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.852 10:23:17 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.852 10:23:17 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:16.852 10:23:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.852 10:23:17 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:16.852 10:23:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.852 [2024-11-15 10:23:17.662586] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
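The scripts/common.sh trace above is a plain field-by-field version comparison: it decides whether the installed lcov is older than 2 by splitting both version strings on '.', '-' or ':' and comparing the fields numerically, and the result selects which LCOV_OPTS get exported. A condensed sketch of the same idea (missing fields are padded with zero here, which only approximates how the traced loop handles unequal lengths):

    # version_lt A B: succeed (return 0) when version A is strictly older than B,
    # e.g. version_lt 1.15 2 succeeds, mirroring the "lt 1.15 2" call in the trace.
    version_lt() {
        local IFS=.-:
        local -a v1=($1) v2=($2)
        local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            local a=${v1[i]:-0} b=${v2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1   # equal versions are not "less than"
    }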
00:05:16.852 [2024-11-15 10:23:17.662672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58640 ] 00:05:17.113 [2024-11-15 10:23:17.805720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.113 [2024-11-15 10:23:17.855283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.113 [2024-11-15 10:23:17.925723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:17.373 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:17.373 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:05:17.373 10:23:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58640 00:05:17.373 10:23:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58640 00:05:17.373 10:23:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.942 10:23:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58640 00:05:17.942 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58640 ']' 00:05:17.942 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58640 00:05:17.942 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:05:17.942 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:17.942 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58640 00:05:17.942 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:17.942 killing process with pid 58640 00:05:17.942 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:17.942 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58640' 00:05:17.942 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58640 00:05:17.942 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58640 00:05:18.201 10:23:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58640 00:05:18.201 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:18.201 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58640 00:05:18.201 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:18.201 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.201 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:18.201 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.201 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58640 00:05:18.201 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58640 ']' 00:05:18.201 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.201 
10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:18.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.201 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.201 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:18.201 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.201 ERROR: process (pid: 58640) is no longer running 00:05:18.201 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58640) - No such process 00:05:18.201 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:18.201 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:05:18.201 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:18.201 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:18.201 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:18.201 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:18.201 10:23:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:18.201 10:23:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:18.201 10:23:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:18.201 10:23:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:18.201 00:05:18.201 real 0m1.370s 00:05:18.201 user 0m1.344s 00:05:18.201 sys 0m0.519s 00:05:18.201 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:18.201 ************************************ 00:05:18.201 END TEST default_locks 00:05:18.201 ************************************ 00:05:18.201 10:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.201 10:23:19 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:18.201 10:23:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:18.201 10:23:19 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:18.201 10:23:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.201 ************************************ 00:05:18.201 START TEST default_locks_via_rpc 00:05:18.201 ************************************ 00:05:18.201 10:23:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:05:18.201 10:23:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58679 00:05:18.201 10:23:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58679 00:05:18.201 10:23:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.201 10:23:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58679 ']' 00:05:18.201 10:23:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.201 10:23:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:05:18.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.201 10:23:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.201 10:23:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:18.201 10:23:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.460 [2024-11-15 10:23:19.087953] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:05:18.460 [2024-11-15 10:23:19.088073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58679 ] 00:05:18.460 [2024-11-15 10:23:19.230374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.460 [2024-11-15 10:23:19.282555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.719 [2024-11-15 10:23:19.353121] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:18.719 10:23:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:18.719 10:23:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:18.720 10:23:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:18.720 10:23:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.720 10:23:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.720 10:23:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.720 10:23:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:18.720 10:23:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:18.720 10:23:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:18.720 10:23:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:18.720 10:23:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:18.720 10:23:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.720 10:23:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.720 10:23:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.720 10:23:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58679 00:05:18.720 10:23:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:18.720 10:23:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58679 00:05:19.287 10:23:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58679 00:05:19.287 10:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58679 ']' 00:05:19.287 10:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58679 00:05:19.287 10:23:20 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:05:19.287 10:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:19.287 10:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58679 00:05:19.287 10:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:19.287 killing process with pid 58679 00:05:19.287 10:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:19.287 10:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58679' 00:05:19.287 10:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58679 00:05:19.287 10:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58679 00:05:19.856 00:05:19.856 real 0m1.414s 00:05:19.856 user 0m1.374s 00:05:19.856 sys 0m0.546s 00:05:19.856 10:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:19.856 10:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.856 ************************************ 00:05:19.856 END TEST default_locks_via_rpc 00:05:19.856 ************************************ 00:05:19.856 10:23:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:19.856 10:23:20 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:19.856 10:23:20 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:19.856 10:23:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.856 ************************************ 00:05:19.856 START TEST non_locking_app_on_locked_coremask 00:05:19.856 ************************************ 00:05:19.856 10:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:05:19.856 10:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58723 00:05:19.856 10:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.856 10:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58723 /var/tmp/spdk.sock 00:05:19.856 10:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58723 ']' 00:05:19.856 10:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.856 10:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:19.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.856 10:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
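Both default_locks variants boil down to the same observable: an spdk_tgt started with -m 0x1 should hold a file lock named spdk_cpu_lock_* for the core it claimed, visible through lslocks, and the _via_rpc variant additionally drops and re-takes that lock at runtime. A minimal sketch of the check, expressed entirely through lslocks for brevity (the pid is the one reported in this run and is only illustrative; the two RPC names are the ones traced above):

    # Does process $1 hold any per-core SPDK lock file?
    locks_exist_sketch() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    pid=58679                                   # illustrative; use the real spdk_tgt pid
    locks_exist_sketch "$pid" && echo "core locks held"

    # default_locks_via_rpc: release and re-acquire the locks without restarting
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_disable_cpumask_locks
    locks_exist_sketch "$pid" || echo "core locks released"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
    locks_exist_sketch "$pid" && echo "core locks re-acquired"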
00:05:19.856 10:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:19.856 10:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.856 [2024-11-15 10:23:20.554968] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:05:19.856 [2024-11-15 10:23:20.555094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58723 ] 00:05:19.856 [2024-11-15 10:23:20.693161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.114 [2024-11-15 10:23:20.737317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.114 [2024-11-15 10:23:20.806216] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:20.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:20.373 10:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:20.373 10:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:20.373 10:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58731 00:05:20.373 10:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:20.373 10:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58731 /var/tmp/spdk2.sock 00:05:20.373 10:23:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58731 ']' 00:05:20.373 10:23:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.373 10:23:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:20.373 10:23:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.373 10:23:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:20.373 10:23:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.373 [2024-11-15 10:23:21.068898] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:05:20.373 [2024-11-15 10:23:21.069267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58731 ] 00:05:20.373 [2024-11-15 10:23:21.224906] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
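The non_locking_app_on_locked_coremask case starts a second target on the very same core mask, but with --disable-cpumask-locks and its own RPC socket, which is why it logs "CPU core locks deactivated." instead of refusing to start. Roughly (binary path and socket names as in this run; the pids and expected counts are illustrative, and the polling helper sketched earlier would sit where the comments indicate):

    # First instance claims core 0 and takes its lock file.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    pid1=$!
    # waitforlisten_sketch "$pid1" /var/tmp/spdk.sock

    # Second instance shares core 0 but opts out of core locking entirely,
    # and listens on a separate RPC socket so the two do not collide.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    # waitforlisten_sketch "$pid2" /var/tmp/spdk2.sock

    # Both should come up; only the first should hold spdk_cpu_lock_000.
    lslocks -p "$pid1" | grep -c spdk_cpu_lock   # expected: 1
    lslocks -p "$pid2" | grep -c spdk_cpu_lock   # expected: 0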
00:05:20.373 [2024-11-15 10:23:21.224962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.632 [2024-11-15 10:23:21.333503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.632 [2024-11-15 10:23:21.469890] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:21.199 10:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:21.199 10:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:21.199 10:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58723 00:05:21.199 10:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:21.199 10:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58723 00:05:22.135 10:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58723 00:05:22.135 10:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58723 ']' 00:05:22.135 10:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58723 00:05:22.135 10:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:22.135 10:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:22.135 10:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58723 00:05:22.135 killing process with pid 58723 00:05:22.135 10:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:22.135 10:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:22.135 10:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58723' 00:05:22.135 10:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58723 00:05:22.135 10:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58723 00:05:23.070 10:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58731 00:05:23.070 10:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58731 ']' 00:05:23.070 10:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58731 00:05:23.070 10:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:23.070 10:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:23.070 10:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58731 00:05:23.070 killing process with pid 58731 00:05:23.070 10:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:23.070 10:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:23.070 10:23:23 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58731' 00:05:23.070 10:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58731 00:05:23.070 10:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58731 00:05:23.329 ************************************ 00:05:23.329 END TEST non_locking_app_on_locked_coremask 00:05:23.329 ************************************ 00:05:23.329 00:05:23.329 real 0m3.574s 00:05:23.329 user 0m3.883s 00:05:23.329 sys 0m1.076s 00:05:23.329 10:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:23.329 10:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.329 10:23:24 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:23.329 10:23:24 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:23.329 10:23:24 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:23.329 10:23:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.329 ************************************ 00:05:23.329 START TEST locking_app_on_unlocked_coremask 00:05:23.329 ************************************ 00:05:23.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.329 10:23:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:05:23.329 10:23:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58798 00:05:23.329 10:23:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58798 /var/tmp/spdk.sock 00:05:23.329 10:23:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58798 ']' 00:05:23.329 10:23:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.329 10:23:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:23.329 10:23:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:23.329 10:23:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.329 10:23:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:23.329 10:23:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.587 [2024-11-15 10:23:24.192251] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:05:23.587 [2024-11-15 10:23:24.192366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58798 ] 00:05:23.587 [2024-11-15 10:23:24.342208] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
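The killprocess trace repeated through this section is deliberately defensive: it refuses to act on an empty pid, checks on Linux that the target's command name is an SPDK reactor rather than the sudo wrapper, and only then signals and waits for it. A condensed sketch under those same assumptions:

    killprocess_sketch() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 1        # nothing to do, already gone
        if [[ $(uname) == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [[ $name == sudo ]] && return 1           # never signal the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                       # wait only reaps our own children
    }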
00:05:23.587 [2024-11-15 10:23:24.342263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.587 [2024-11-15 10:23:24.396661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.846 [2024-11-15 10:23:24.466513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:24.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:24.442 10:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:24.442 10:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:24.442 10:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58814 00:05:24.442 10:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58814 /var/tmp/spdk2.sock 00:05:24.442 10:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:24.442 10:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58814 ']' 00:05:24.442 10:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.442 10:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:24.442 10:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.442 10:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:24.442 10:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.442 [2024-11-15 10:23:25.195493] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:05:24.442 [2024-11-15 10:23:25.195824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58814 ] 00:05:24.708 [2024-11-15 10:23:25.359887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.708 [2024-11-15 10:23:25.473054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.964 [2024-11-15 10:23:25.608547] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:25.529 10:23:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:25.529 10:23:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:25.529 10:23:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58814 00:05:25.529 10:23:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58814 00:05:25.529 10:23:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.463 10:23:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58798 00:05:26.463 10:23:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58798 ']' 00:05:26.463 10:23:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58798 00:05:26.463 10:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:26.463 10:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:26.463 10:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58798 00:05:26.463 killing process with pid 58798 00:05:26.463 10:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:26.463 10:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:26.463 10:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58798' 00:05:26.463 10:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58798 00:05:26.463 10:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 58798 00:05:27.030 10:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58814 00:05:27.030 10:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58814 ']' 00:05:27.030 10:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58814 00:05:27.030 10:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:27.030 10:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:27.030 10:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58814 00:05:27.030 killing process with pid 58814 00:05:27.030 10:23:27 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:27.030 10:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:27.030 10:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58814' 00:05:27.030 10:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58814 00:05:27.030 10:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 58814 00:05:27.596 00:05:27.596 real 0m4.037s 00:05:27.596 user 0m4.533s 00:05:27.596 sys 0m1.101s 00:05:27.596 10:23:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:27.596 ************************************ 00:05:27.596 END TEST locking_app_on_unlocked_coremask 00:05:27.596 ************************************ 00:05:27.596 10:23:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.596 10:23:28 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:27.596 10:23:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:27.596 10:23:28 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:27.596 10:23:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.596 ************************************ 00:05:27.596 START TEST locking_app_on_locked_coremask 00:05:27.596 ************************************ 00:05:27.596 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:05:27.596 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58881 00:05:27.596 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58881 /var/tmp/spdk.sock 00:05:27.596 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.596 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58881 ']' 00:05:27.596 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.596 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:27.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.596 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.596 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:27.596 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.596 [2024-11-15 10:23:28.284519] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
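Every scenario in this section is driven through the same run_test wrapper, which is what produces the banner lines and the real/user/sys timing seen above. A rough sketch of such a wrapper (banner width and exact timing format are approximations of the traced output):

    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    # e.g. run_test_sketch default_locks default_locks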
00:05:27.596 [2024-11-15 10:23:28.284641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58881 ] 00:05:27.596 [2024-11-15 10:23:28.431763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.854 [2024-11-15 10:23:28.488523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.854 [2024-11-15 10:23:28.556575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:28.112 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:28.112 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:28.112 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58890 00:05:28.112 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58890 /var/tmp/spdk2.sock 00:05:28.112 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:28.112 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58890 /var/tmp/spdk2.sock 00:05:28.112 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:28.112 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:28.112 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.112 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:28.112 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.112 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58890 /var/tmp/spdk2.sock 00:05:28.112 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58890 ']' 00:05:28.112 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:28.112 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:28.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:28.112 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:28.112 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:28.112 10:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.112 [2024-11-15 10:23:28.802175] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:05:28.112 [2024-11-15 10:23:28.802281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58890 ] 00:05:28.112 [2024-11-15 10:23:28.956515] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58881 has claimed it. 00:05:28.112 [2024-11-15 10:23:28.956608] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:29.047 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58890) - No such process 00:05:29.047 ERROR: process (pid: 58890) is no longer running 00:05:29.047 10:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:29.047 10:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:29.047 10:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:29.047 10:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:29.047 10:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:29.047 10:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:29.047 10:23:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58881 00:05:29.047 10:23:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58881 00:05:29.047 10:23:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.304 10:23:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58881 00:05:29.304 10:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58881 ']' 00:05:29.304 10:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58881 00:05:29.304 10:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:29.304 10:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:29.304 10:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58881 00:05:29.304 killing process with pid 58881 00:05:29.304 10:23:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:29.304 10:23:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:29.304 10:23:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58881' 00:05:29.304 10:23:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58881 00:05:29.304 10:23:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58881 00:05:29.562 00:05:29.562 real 0m2.199s 00:05:29.562 user 0m2.460s 00:05:29.562 sys 0m0.619s 00:05:29.562 10:23:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:29.562 10:23:30 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:29.562 ************************************ 00:05:29.562 END TEST locking_app_on_locked_coremask 00:05:29.562 ************************************ 00:05:29.821 10:23:30 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:29.821 10:23:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:29.821 10:23:30 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:29.821 10:23:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.821 ************************************ 00:05:29.821 START TEST locking_overlapped_coremask 00:05:29.821 ************************************ 00:05:29.821 10:23:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:05:29.821 10:23:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58935 00:05:29.821 10:23:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58935 /var/tmp/spdk.sock 00:05:29.821 10:23:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:29.821 10:23:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 58935 ']' 00:05:29.821 10:23:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.821 10:23:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:29.821 10:23:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.821 10:23:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:29.821 10:23:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.821 [2024-11-15 10:23:30.539469] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
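The "Cannot create lock on core 0, probably process 58881 has claimed it" failure just above is the expected outcome: the second launch is wrapped in a NOT helper, so an ordinary non-zero exit counts as a pass, while a signal-style status above 128 would still fail the test. A minimal sketch of that inversion (the helper name is illustrative):

    # Succeed only when the wrapped command fails with an ordinary error status.
    NOT_sketch() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return 1   # killed by a signal: treat as a real failure
        (( es == 0 )) && return 1    # command unexpectedly succeeded
        return 0
    }

    # e.g. NOT_sketch waitforlisten_sketch 58890 /var/tmp/spdk2.sock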
00:05:29.821 [2024-11-15 10:23:30.539592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58935 ] 00:05:30.078 [2024-11-15 10:23:30.679249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:30.078 [2024-11-15 10:23:30.730344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.078 [2024-11-15 10:23:30.730399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.078 [2024-11-15 10:23:30.730403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.078 [2024-11-15 10:23:30.803157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:30.337 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:30.337 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:30.337 10:23:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58946 00:05:30.337 10:23:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:30.337 10:23:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58946 /var/tmp/spdk2.sock 00:05:30.337 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:30.337 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58946 /var/tmp/spdk2.sock 00:05:30.337 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:30.337 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.337 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:30.337 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.337 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58946 /var/tmp/spdk2.sock 00:05:30.337 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 58946 ']' 00:05:30.337 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.337 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:30.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.337 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.337 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:30.337 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.337 [2024-11-15 10:23:31.090200] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
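The overlapped-coremask case pits -m 0x7 (cores 0, 1, 2) against -m 0x1c (cores 2, 3, 4); the bitwise AND of the two masks is the contested set, which is why the failure that follows names core 2 specifically. The overlap is easy to confirm from the shell:

    # 0x07 = 0b00111 -> cores 0,1,2 ; 0x1c = 0b11100 -> cores 2,3,4
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2 only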
00:05:30.337 [2024-11-15 10:23:31.090297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58946 ] 00:05:30.594 [2024-11-15 10:23:31.257402] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58935 has claimed it. 00:05:30.594 [2024-11-15 10:23:31.257475] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:31.162 ERROR: process (pid: 58946) is no longer running 00:05:31.162 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58946) - No such process 00:05:31.162 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:31.162 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:31.162 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:31.162 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:31.162 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:31.162 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:31.162 10:23:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:31.162 10:23:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:31.162 10:23:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:31.162 10:23:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:31.162 10:23:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58935 00:05:31.162 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 58935 ']' 00:05:31.162 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 58935 00:05:31.162 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:05:31.162 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:31.162 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58935 00:05:31.162 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:31.162 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:31.162 killing process with pid 58935 00:05:31.162 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58935' 00:05:31.162 10:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 58935 00:05:31.162 10:23:31 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 58935 00:05:31.727 00:05:31.727 real 0m1.811s 00:05:31.727 user 0m4.937s 00:05:31.727 sys 0m0.432s 00:05:31.727 ************************************ 00:05:31.727 END TEST locking_overlapped_coremask 00:05:31.727 ************************************ 00:05:31.727 10:23:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:31.727 10:23:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.727 10:23:32 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:31.727 10:23:32 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:31.727 10:23:32 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:31.727 10:23:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.727 ************************************ 00:05:31.727 START TEST locking_overlapped_coremask_via_rpc 00:05:31.727 ************************************ 00:05:31.727 10:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:05:31.727 10:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58991 00:05:31.727 10:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:31.727 10:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58991 /var/tmp/spdk.sock 00:05:31.727 10:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58991 ']' 00:05:31.727 10:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.727 10:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:31.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.727 10:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.727 10:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:31.727 10:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.727 [2024-11-15 10:23:32.392941] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:05:31.727 [2024-11-15 10:23:32.393078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58991 ] 00:05:31.727 [2024-11-15 10:23:32.539396] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:31.727 [2024-11-15 10:23:32.539452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:31.984 [2024-11-15 10:23:32.599412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.984 [2024-11-15 10:23:32.599597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.984 [2024-11-15 10:23:32.599613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.984 [2024-11-15 10:23:32.672802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:32.241 10:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:32.241 10:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:32.241 10:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59002 00:05:32.241 10:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:32.241 10:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59002 /var/tmp/spdk2.sock 00:05:32.241 10:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59002 ']' 00:05:32.241 10:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.242 10:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:32.242 10:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:32.242 10:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:32.242 10:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.242 [2024-11-15 10:23:32.944941] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:05:32.242 [2024-11-15 10:23:32.945288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59002 ] 00:05:32.499 [2024-11-15 10:23:33.112475] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:32.499 [2024-11-15 10:23:33.112531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:32.499 [2024-11-15 10:23:33.239521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.499 [2024-11-15 10:23:33.243226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:32.499 [2024-11-15 10:23:33.243228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.756 [2024-11-15 10:23:33.385875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.325 [2024-11-15 10:23:34.052158] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58991 has claimed it. 
00:05:33.325 request: 00:05:33.325 { 00:05:33.325 "method": "framework_enable_cpumask_locks", 00:05:33.325 "req_id": 1 00:05:33.325 } 00:05:33.325 Got JSON-RPC error response 00:05:33.325 response: 00:05:33.325 { 00:05:33.325 "code": -32603, 00:05:33.325 "message": "Failed to claim CPU core: 2" 00:05:33.325 } 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58991 /var/tmp/spdk.sock 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58991 ']' 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:33.325 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.584 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:33.584 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:33.584 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59002 /var/tmp/spdk2.sock 00:05:33.584 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59002 ']' 00:05:33.584 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.584 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:33.584 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
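The rejected claim above is the point of this test case: both targets were started with --disable-cpumask-locks, the first one then took its core locks over RPC, and the second one's attempt to lock the shared core 2 is refused with error -32603. A minimal way to reproduce the same exchange by hand, sketched against a local SPDK build (binary and script paths are illustrative, not taken from this run; the test itself waits for each socket with waitforlisten before issuing RPCs):

    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # cores 0-2, listens on /var/tmp/spdk.sock
    ./build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # cores 2-4, second RPC socket
    ./scripts/rpc.py framework_enable_cpumask_locks                                # first target claims /var/tmp/spdk_cpu_lock_000..002
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks         # rejected: core 2 is already locked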
00:05:33.584 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:33.584 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.151 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:34.151 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:34.151 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:34.151 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:34.151 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:34.151 ************************************ 00:05:34.151 END TEST locking_overlapped_coremask_via_rpc 00:05:34.151 ************************************ 00:05:34.151 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:34.151 00:05:34.151 real 0m2.384s 00:05:34.151 user 0m1.443s 00:05:34.151 sys 0m0.160s 00:05:34.151 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:34.151 10:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.151 10:23:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:34.151 10:23:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58991 ]] 00:05:34.151 10:23:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58991 00:05:34.151 10:23:34 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58991 ']' 00:05:34.151 10:23:34 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58991 00:05:34.151 10:23:34 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:34.151 10:23:34 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:34.151 10:23:34 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58991 00:05:34.151 killing process with pid 58991 00:05:34.151 10:23:34 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:34.151 10:23:34 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:34.151 10:23:34 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58991' 00:05:34.151 10:23:34 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 58991 00:05:34.151 10:23:34 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 58991 00:05:34.409 10:23:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59002 ]] 00:05:34.409 10:23:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59002 00:05:34.409 10:23:35 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59002 ']' 00:05:34.409 10:23:35 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59002 00:05:34.409 10:23:35 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:34.409 10:23:35 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:34.409 
10:23:35 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59002 00:05:34.409 killing process with pid 59002 00:05:34.409 10:23:35 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:34.409 10:23:35 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:34.409 10:23:35 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59002' 00:05:34.409 10:23:35 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59002 00:05:34.409 10:23:35 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59002 00:05:34.977 10:23:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:34.977 Process with pid 58991 is not found 00:05:34.977 10:23:35 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:34.977 10:23:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58991 ]] 00:05:34.977 10:23:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58991 00:05:34.977 10:23:35 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58991 ']' 00:05:34.977 10:23:35 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58991 00:05:34.977 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (58991) - No such process 00:05:34.977 10:23:35 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 58991 is not found' 00:05:34.977 10:23:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59002 ]] 00:05:34.977 10:23:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59002 00:05:34.977 Process with pid 59002 is not found 00:05:34.977 10:23:35 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59002 ']' 00:05:34.977 10:23:35 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59002 00:05:34.977 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59002) - No such process 00:05:34.977 10:23:35 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59002 is not found' 00:05:34.977 10:23:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:34.977 00:05:34.977 real 0m18.187s 00:05:34.977 user 0m32.422s 00:05:34.977 sys 0m5.386s 00:05:34.977 10:23:35 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:34.977 10:23:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.977 ************************************ 00:05:34.977 END TEST cpu_locks 00:05:34.977 ************************************ 00:05:34.977 ************************************ 00:05:34.977 END TEST event 00:05:34.977 ************************************ 00:05:34.977 00:05:34.977 real 0m45.393s 00:05:34.977 user 1m29.593s 00:05:34.977 sys 0m9.075s 00:05:34.977 10:23:35 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:34.977 10:23:35 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.977 10:23:35 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:34.977 10:23:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:34.977 10:23:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:34.977 10:23:35 -- common/autotest_common.sh@10 -- # set +x 00:05:34.977 ************************************ 00:05:34.977 START TEST thread 00:05:34.977 ************************************ 00:05:34.977 10:23:35 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:34.977 * Looking for test storage... 
00:05:34.977 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:34.977 10:23:35 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:34.977 10:23:35 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:34.977 10:23:35 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:35.237 10:23:35 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:35.237 10:23:35 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.237 10:23:35 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.237 10:23:35 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.237 10:23:35 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.237 10:23:35 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.237 10:23:35 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.237 10:23:35 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.237 10:23:35 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.237 10:23:35 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.237 10:23:35 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.237 10:23:35 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.237 10:23:35 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:35.237 10:23:35 thread -- scripts/common.sh@345 -- # : 1 00:05:35.237 10:23:35 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.237 10:23:35 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.237 10:23:35 thread -- scripts/common.sh@365 -- # decimal 1 00:05:35.237 10:23:35 thread -- scripts/common.sh@353 -- # local d=1 00:05:35.237 10:23:35 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.237 10:23:35 thread -- scripts/common.sh@355 -- # echo 1 00:05:35.237 10:23:35 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.237 10:23:35 thread -- scripts/common.sh@366 -- # decimal 2 00:05:35.237 10:23:35 thread -- scripts/common.sh@353 -- # local d=2 00:05:35.237 10:23:35 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.237 10:23:35 thread -- scripts/common.sh@355 -- # echo 2 00:05:35.237 10:23:35 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.237 10:23:35 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.237 10:23:35 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.237 10:23:35 thread -- scripts/common.sh@368 -- # return 0 00:05:35.237 10:23:35 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.237 10:23:35 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:35.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.237 --rc genhtml_branch_coverage=1 00:05:35.237 --rc genhtml_function_coverage=1 00:05:35.237 --rc genhtml_legend=1 00:05:35.237 --rc geninfo_all_blocks=1 00:05:35.237 --rc geninfo_unexecuted_blocks=1 00:05:35.237 00:05:35.237 ' 00:05:35.237 10:23:35 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:35.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.237 --rc genhtml_branch_coverage=1 00:05:35.237 --rc genhtml_function_coverage=1 00:05:35.237 --rc genhtml_legend=1 00:05:35.237 --rc geninfo_all_blocks=1 00:05:35.237 --rc geninfo_unexecuted_blocks=1 00:05:35.237 00:05:35.237 ' 00:05:35.237 10:23:35 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:35.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:35.237 --rc genhtml_branch_coverage=1 00:05:35.237 --rc genhtml_function_coverage=1 00:05:35.237 --rc genhtml_legend=1 00:05:35.237 --rc geninfo_all_blocks=1 00:05:35.237 --rc geninfo_unexecuted_blocks=1 00:05:35.237 00:05:35.237 ' 00:05:35.237 10:23:35 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:35.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.237 --rc genhtml_branch_coverage=1 00:05:35.237 --rc genhtml_function_coverage=1 00:05:35.237 --rc genhtml_legend=1 00:05:35.237 --rc geninfo_all_blocks=1 00:05:35.237 --rc geninfo_unexecuted_blocks=1 00:05:35.237 00:05:35.237 ' 00:05:35.238 10:23:35 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:35.238 10:23:35 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:35.238 10:23:35 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:35.238 10:23:35 thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.238 ************************************ 00:05:35.238 START TEST thread_poller_perf 00:05:35.238 ************************************ 00:05:35.238 10:23:35 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:35.238 [2024-11-15 10:23:35.911347] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:05:35.238 [2024-11-15 10:23:35.911455] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59138 ] 00:05:35.238 [2024-11-15 10:23:36.060850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.497 [2024-11-15 10:23:36.113953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.497 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:36.433 [2024-11-15T10:23:37.286Z] ====================================== 00:05:36.433 [2024-11-15T10:23:37.286Z] busy:2210192976 (cyc) 00:05:36.433 [2024-11-15T10:23:37.286Z] total_run_count: 320000 00:05:36.433 [2024-11-15T10:23:37.286Z] tsc_hz: 2200000000 (cyc) 00:05:36.433 [2024-11-15T10:23:37.286Z] ====================================== 00:05:36.433 [2024-11-15T10:23:37.286Z] poller_cost: 6906 (cyc), 3139 (nsec) 00:05:36.433 00:05:36.433 real 0m1.278s 00:05:36.433 user 0m1.120s 00:05:36.433 sys 0m0.047s 00:05:36.433 10:23:37 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:36.433 10:23:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.433 ************************************ 00:05:36.433 END TEST thread_poller_perf 00:05:36.433 ************************************ 00:05:36.433 10:23:37 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:36.433 10:23:37 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:36.433 10:23:37 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:36.433 10:23:37 thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.433 ************************************ 00:05:36.433 START TEST thread_poller_perf 00:05:36.433 ************************************ 00:05:36.433 10:23:37 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:36.433 [2024-11-15 10:23:37.250325] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:05:36.433 [2024-11-15 10:23:37.250444] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59168 ] 00:05:36.692 [2024-11-15 10:23:37.392020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.692 Running 1000 pollers for 1 seconds with 0 microseconds period. 
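For reference, the poller_cost figure printed by poller_perf is consistent with busy cycles divided by total_run_count, with tsc_hz used for the nanosecond conversion; checking the -l 1 run above by hand (arithmetic only, not output from this job):

    2210192976 cyc / 320000 runs ≈ 6906 cyc per poll
    6906 cyc / 2.2 cyc per nsec (tsc_hz: 2200000000) ≈ 3139 nsec per poll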
00:05:36.693 [2024-11-15 10:23:37.437985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.071 [2024-11-15T10:23:38.924Z] ====================================== 00:05:38.071 [2024-11-15T10:23:38.924Z] busy:2203143518 (cyc) 00:05:38.071 [2024-11-15T10:23:38.924Z] total_run_count: 4609000 00:05:38.071 [2024-11-15T10:23:38.924Z] tsc_hz: 2200000000 (cyc) 00:05:38.071 [2024-11-15T10:23:38.924Z] ====================================== 00:05:38.071 [2024-11-15T10:23:38.924Z] poller_cost: 478 (cyc), 217 (nsec) 00:05:38.071 00:05:38.071 real 0m1.264s 00:05:38.071 user 0m1.108s 00:05:38.071 sys 0m0.048s 00:05:38.071 10:23:38 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:38.071 10:23:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:38.071 ************************************ 00:05:38.071 END TEST thread_poller_perf 00:05:38.071 ************************************ 00:05:38.071 10:23:38 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:38.071 ************************************ 00:05:38.071 END TEST thread 00:05:38.071 ************************************ 00:05:38.071 00:05:38.071 real 0m2.853s 00:05:38.071 user 0m2.382s 00:05:38.071 sys 0m0.246s 00:05:38.071 10:23:38 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:38.071 10:23:38 thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.071 10:23:38 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:38.071 10:23:38 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:38.071 10:23:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:38.071 10:23:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:38.071 10:23:38 -- common/autotest_common.sh@10 -- # set +x 00:05:38.071 ************************************ 00:05:38.071 START TEST app_cmdline 00:05:38.071 ************************************ 00:05:38.071 10:23:38 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:38.071 * Looking for test storage... 
00:05:38.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:38.071 10:23:38 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:38.071 10:23:38 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:38.071 10:23:38 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:38.071 10:23:38 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.071 10:23:38 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:38.071 10:23:38 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.071 10:23:38 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:38.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.071 --rc genhtml_branch_coverage=1 00:05:38.071 --rc genhtml_function_coverage=1 00:05:38.071 --rc genhtml_legend=1 00:05:38.071 --rc geninfo_all_blocks=1 00:05:38.071 --rc geninfo_unexecuted_blocks=1 00:05:38.071 00:05:38.071 ' 00:05:38.071 10:23:38 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:38.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.071 --rc genhtml_branch_coverage=1 00:05:38.071 --rc genhtml_function_coverage=1 00:05:38.071 --rc genhtml_legend=1 00:05:38.071 --rc geninfo_all_blocks=1 00:05:38.071 --rc geninfo_unexecuted_blocks=1 00:05:38.071 
00:05:38.071 ' 00:05:38.071 10:23:38 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:38.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.071 --rc genhtml_branch_coverage=1 00:05:38.071 --rc genhtml_function_coverage=1 00:05:38.071 --rc genhtml_legend=1 00:05:38.071 --rc geninfo_all_blocks=1 00:05:38.071 --rc geninfo_unexecuted_blocks=1 00:05:38.071 00:05:38.071 ' 00:05:38.071 10:23:38 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:38.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.071 --rc genhtml_branch_coverage=1 00:05:38.071 --rc genhtml_function_coverage=1 00:05:38.071 --rc genhtml_legend=1 00:05:38.071 --rc geninfo_all_blocks=1 00:05:38.071 --rc geninfo_unexecuted_blocks=1 00:05:38.071 00:05:38.071 ' 00:05:38.071 10:23:38 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:38.071 10:23:38 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59256 00:05:38.071 10:23:38 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59256 00:05:38.071 10:23:38 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:38.071 10:23:38 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59256 ']' 00:05:38.071 10:23:38 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.071 10:23:38 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:38.071 10:23:38 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.071 10:23:38 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:38.071 10:23:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:38.071 [2024-11-15 10:23:38.860902] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:05:38.071 [2024-11-15 10:23:38.861300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59256 ] 00:05:38.331 [2024-11-15 10:23:39.005458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.331 [2024-11-15 10:23:39.075554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.331 [2024-11-15 10:23:39.160159] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:38.590 10:23:39 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:38.590 10:23:39 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:05:38.590 10:23:39 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:38.850 { 00:05:38.850 "version": "SPDK v25.01-pre git sha1 4b2d483c6", 00:05:38.850 "fields": { 00:05:38.850 "major": 25, 00:05:38.850 "minor": 1, 00:05:38.850 "patch": 0, 00:05:38.850 "suffix": "-pre", 00:05:38.850 "commit": "4b2d483c6" 00:05:38.850 } 00:05:38.850 } 00:05:38.850 10:23:39 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:38.850 10:23:39 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:38.850 10:23:39 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:38.850 10:23:39 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:38.850 10:23:39 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:38.850 10:23:39 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:38.850 10:23:39 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:38.850 10:23:39 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.850 10:23:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:38.850 10:23:39 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.109 10:23:39 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:39.109 10:23:39 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:39.109 10:23:39 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:39.109 10:23:39 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:39.109 10:23:39 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:39.109 10:23:39 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:39.109 10:23:39 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.109 10:23:39 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:39.109 10:23:39 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.109 10:23:39 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:39.109 10:23:39 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.109 10:23:39 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:39.109 10:23:39 app_cmdline -- common/autotest_common.sh@644 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:39.109 10:23:39 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:39.368 request: 00:05:39.368 { 00:05:39.368 "method": "env_dpdk_get_mem_stats", 00:05:39.368 "req_id": 1 00:05:39.368 } 00:05:39.368 Got JSON-RPC error response 00:05:39.368 response: 00:05:39.368 { 00:05:39.368 "code": -32601, 00:05:39.368 "message": "Method not found" 00:05:39.368 } 00:05:39.368 10:23:40 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:39.368 10:23:40 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:39.368 10:23:40 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:39.368 10:23:40 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:39.368 10:23:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59256 00:05:39.368 10:23:40 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59256 ']' 00:05:39.368 10:23:40 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59256 00:05:39.368 10:23:40 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:05:39.368 10:23:40 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:39.368 10:23:40 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59256 00:05:39.368 killing process with pid 59256 00:05:39.368 10:23:40 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:39.368 10:23:40 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:39.368 10:23:40 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59256' 00:05:39.368 10:23:40 app_cmdline -- common/autotest_common.sh@971 -- # kill 59256 00:05:39.368 10:23:40 app_cmdline -- common/autotest_common.sh@976 -- # wait 59256 00:05:39.936 ************************************ 00:05:39.936 END TEST app_cmdline 00:05:39.936 ************************************ 00:05:39.936 00:05:39.936 real 0m1.920s 00:05:39.936 user 0m2.321s 00:05:39.936 sys 0m0.519s 00:05:39.936 10:23:40 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:39.936 10:23:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:39.936 10:23:40 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:39.936 10:23:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:39.936 10:23:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:39.936 10:23:40 -- common/autotest_common.sh@10 -- # set +x 00:05:39.936 ************************************ 00:05:39.936 START TEST version 00:05:39.936 ************************************ 00:05:39.936 10:23:40 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:39.936 * Looking for test storage... 
00:05:39.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:39.936 10:23:40 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:39.936 10:23:40 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:39.936 10:23:40 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:39.936 10:23:40 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:39.936 10:23:40 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.936 10:23:40 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.936 10:23:40 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.936 10:23:40 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.936 10:23:40 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.936 10:23:40 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.936 10:23:40 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.936 10:23:40 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.936 10:23:40 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.936 10:23:40 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.937 10:23:40 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.937 10:23:40 version -- scripts/common.sh@344 -- # case "$op" in 00:05:39.937 10:23:40 version -- scripts/common.sh@345 -- # : 1 00:05:39.937 10:23:40 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.937 10:23:40 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.937 10:23:40 version -- scripts/common.sh@365 -- # decimal 1 00:05:39.937 10:23:40 version -- scripts/common.sh@353 -- # local d=1 00:05:39.937 10:23:40 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.937 10:23:40 version -- scripts/common.sh@355 -- # echo 1 00:05:39.937 10:23:40 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.937 10:23:40 version -- scripts/common.sh@366 -- # decimal 2 00:05:39.937 10:23:40 version -- scripts/common.sh@353 -- # local d=2 00:05:39.937 10:23:40 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.937 10:23:40 version -- scripts/common.sh@355 -- # echo 2 00:05:39.937 10:23:40 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.937 10:23:40 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.937 10:23:40 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.937 10:23:40 version -- scripts/common.sh@368 -- # return 0 00:05:39.937 10:23:40 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.937 10:23:40 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:39.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.937 --rc genhtml_branch_coverage=1 00:05:39.937 --rc genhtml_function_coverage=1 00:05:39.937 --rc genhtml_legend=1 00:05:39.937 --rc geninfo_all_blocks=1 00:05:39.937 --rc geninfo_unexecuted_blocks=1 00:05:39.937 00:05:39.937 ' 00:05:39.937 10:23:40 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:39.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.937 --rc genhtml_branch_coverage=1 00:05:39.937 --rc genhtml_function_coverage=1 00:05:39.937 --rc genhtml_legend=1 00:05:39.937 --rc geninfo_all_blocks=1 00:05:39.937 --rc geninfo_unexecuted_blocks=1 00:05:39.937 00:05:39.937 ' 00:05:39.937 10:23:40 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:39.937 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:39.937 --rc genhtml_branch_coverage=1 00:05:39.937 --rc genhtml_function_coverage=1 00:05:39.937 --rc genhtml_legend=1 00:05:39.937 --rc geninfo_all_blocks=1 00:05:39.937 --rc geninfo_unexecuted_blocks=1 00:05:39.937 00:05:39.937 ' 00:05:39.937 10:23:40 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:39.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.937 --rc genhtml_branch_coverage=1 00:05:39.937 --rc genhtml_function_coverage=1 00:05:39.937 --rc genhtml_legend=1 00:05:39.937 --rc geninfo_all_blocks=1 00:05:39.937 --rc geninfo_unexecuted_blocks=1 00:05:39.937 00:05:39.937 ' 00:05:39.937 10:23:40 version -- app/version.sh@17 -- # get_header_version major 00:05:39.937 10:23:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:39.937 10:23:40 version -- app/version.sh@14 -- # cut -f2 00:05:39.937 10:23:40 version -- app/version.sh@14 -- # tr -d '"' 00:05:39.937 10:23:40 version -- app/version.sh@17 -- # major=25 00:05:39.937 10:23:40 version -- app/version.sh@18 -- # get_header_version minor 00:05:39.937 10:23:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:39.937 10:23:40 version -- app/version.sh@14 -- # cut -f2 00:05:39.937 10:23:40 version -- app/version.sh@14 -- # tr -d '"' 00:05:39.937 10:23:40 version -- app/version.sh@18 -- # minor=1 00:05:39.937 10:23:40 version -- app/version.sh@19 -- # get_header_version patch 00:05:39.937 10:23:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:39.937 10:23:40 version -- app/version.sh@14 -- # cut -f2 00:05:39.937 10:23:40 version -- app/version.sh@14 -- # tr -d '"' 00:05:39.937 10:23:40 version -- app/version.sh@19 -- # patch=0 00:05:39.937 10:23:40 version -- app/version.sh@20 -- # get_header_version suffix 00:05:39.937 10:23:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:39.937 10:23:40 version -- app/version.sh@14 -- # cut -f2 00:05:39.937 10:23:40 version -- app/version.sh@14 -- # tr -d '"' 00:05:40.198 10:23:40 version -- app/version.sh@20 -- # suffix=-pre 00:05:40.198 10:23:40 version -- app/version.sh@22 -- # version=25.1 00:05:40.198 10:23:40 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:40.198 10:23:40 version -- app/version.sh@28 -- # version=25.1rc0 00:05:40.198 10:23:40 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:40.198 10:23:40 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:40.198 10:23:40 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:40.198 10:23:40 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:40.198 00:05:40.198 real 0m0.269s 00:05:40.198 user 0m0.177s 00:05:40.198 sys 0m0.122s 00:05:40.198 ************************************ 00:05:40.198 END TEST version 00:05:40.198 ************************************ 00:05:40.198 10:23:40 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:40.198 10:23:40 version -- common/autotest_common.sh@10 -- # set +x 00:05:40.198 10:23:40 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:40.198 10:23:40 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:40.198 10:23:40 -- spdk/autotest.sh@194 -- # uname -s 00:05:40.198 10:23:40 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:40.198 10:23:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:40.198 10:23:40 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:05:40.198 10:23:40 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:05:40.198 10:23:40 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:40.198 10:23:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:40.198 10:23:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:40.198 10:23:40 -- common/autotest_common.sh@10 -- # set +x 00:05:40.198 ************************************ 00:05:40.198 START TEST spdk_dd 00:05:40.198 ************************************ 00:05:40.198 10:23:40 spdk_dd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:40.198 * Looking for test storage... 00:05:40.198 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:40.198 10:23:40 spdk_dd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:40.198 10:23:40 spdk_dd -- common/autotest_common.sh@1691 -- # lcov --version 00:05:40.198 10:23:40 spdk_dd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:40.459 10:23:41 spdk_dd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@345 -- # : 1 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@368 -- # return 0 00:05:40.459 10:23:41 spdk_dd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.459 10:23:41 spdk_dd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:40.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.459 --rc genhtml_branch_coverage=1 00:05:40.459 --rc genhtml_function_coverage=1 00:05:40.459 --rc genhtml_legend=1 00:05:40.459 --rc geninfo_all_blocks=1 00:05:40.459 --rc geninfo_unexecuted_blocks=1 00:05:40.459 00:05:40.459 ' 00:05:40.459 10:23:41 spdk_dd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:40.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.459 --rc genhtml_branch_coverage=1 00:05:40.459 --rc genhtml_function_coverage=1 00:05:40.459 --rc genhtml_legend=1 00:05:40.459 --rc geninfo_all_blocks=1 00:05:40.459 --rc geninfo_unexecuted_blocks=1 00:05:40.459 00:05:40.459 ' 00:05:40.459 10:23:41 spdk_dd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:40.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.459 --rc genhtml_branch_coverage=1 00:05:40.459 --rc genhtml_function_coverage=1 00:05:40.459 --rc genhtml_legend=1 00:05:40.459 --rc geninfo_all_blocks=1 00:05:40.459 --rc geninfo_unexecuted_blocks=1 00:05:40.459 00:05:40.459 ' 00:05:40.459 10:23:41 spdk_dd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:40.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.459 --rc genhtml_branch_coverage=1 00:05:40.459 --rc genhtml_function_coverage=1 00:05:40.459 --rc genhtml_legend=1 00:05:40.459 --rc geninfo_all_blocks=1 00:05:40.459 --rc geninfo_unexecuted_blocks=1 00:05:40.459 00:05:40.459 ' 00:05:40.459 10:23:41 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:40.459 10:23:41 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:40.459 10:23:41 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.460 10:23:41 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.460 10:23:41 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.460 10:23:41 spdk_dd -- paths/export.sh@5 -- # export PATH 00:05:40.460 10:23:41 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.460 10:23:41 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:40.719 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:40.719 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:40.719 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:40.719 10:23:41 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:05:40.719 10:23:41 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@233 -- # local class 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@235 -- # local progif 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@236 -- # class=01 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:05:40.719 10:23:41 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:05:40.719 10:23:41 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:40.719 10:23:41 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:05:40.719 10:23:41 spdk_dd -- dd/common.sh@139 -- # local lib 00:05:40.719 10:23:41 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:05:40.719 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.719 10:23:41 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:40.719 10:23:41 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:05:40.719 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:05:40.719 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.720 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:05:40.720 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.720 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
00:05:40.720 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.720 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:05:40.720 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.720 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:05:40.720 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.720 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:05:40.720 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.720 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.981 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:05:40.982 * spdk_dd linked to liburing 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:05:40.982 10:23:41 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:05:40.982 10:23:41 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:05:40.982 10:23:41 spdk_dd -- dd/common.sh@153 -- # return 0 00:05:40.982 10:23:41 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:40.982 10:23:41 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:40.982 10:23:41 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:40.982 10:23:41 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:40.982 10:23:41 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:40.982 ************************************ 00:05:40.982 START TEST spdk_dd_basic_rw 00:05:40.982 ************************************ 00:05:40.982 10:23:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:40.982 * Looking for test storage... 00:05:40.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:40.982 10:23:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:40.982 10:23:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lcov --version 00:05:40.982 10:23:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:40.982 10:23:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:40.982 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.982 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.982 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.982 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:40.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.983 --rc genhtml_branch_coverage=1 00:05:40.983 --rc genhtml_function_coverage=1 00:05:40.983 --rc genhtml_legend=1 00:05:40.983 --rc geninfo_all_blocks=1 00:05:40.983 --rc geninfo_unexecuted_blocks=1 00:05:40.983 00:05:40.983 ' 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:40.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.983 --rc genhtml_branch_coverage=1 00:05:40.983 --rc genhtml_function_coverage=1 00:05:40.983 --rc genhtml_legend=1 00:05:40.983 --rc geninfo_all_blocks=1 00:05:40.983 --rc geninfo_unexecuted_blocks=1 00:05:40.983 00:05:40.983 ' 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:40.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.983 --rc genhtml_branch_coverage=1 00:05:40.983 --rc genhtml_function_coverage=1 00:05:40.983 --rc genhtml_legend=1 00:05:40.983 --rc geninfo_all_blocks=1 00:05:40.983 --rc geninfo_unexecuted_blocks=1 00:05:40.983 00:05:40.983 ' 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:40.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.983 --rc genhtml_branch_coverage=1 00:05:40.983 --rc genhtml_function_coverage=1 00:05:40.983 --rc genhtml_legend=1 00:05:40.983 --rc geninfo_all_blocks=1 00:05:40.983 --rc geninfo_unexecuted_blocks=1 00:05:40.983 00:05:40.983 ' 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:05:40.983 10:23:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:41.244 10:23:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:41.244 10:23:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:41.244 10:23:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:41.244 10:23:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
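The get_native_nvme_bs step traced next shows how dd/common.sh decides the drive's native block size before any data is moved: it runs spdk_nvme_identify against the controller at 0000:00:10.0, captures the output, pulls the current LBA format index out with a bash regex, and then reads that format's data size (4096 in this run). A minimal sketch of that flow, reconstructed from the trace rather than copied from dd/common.sh (the function name is illustrative; the identify binary path is the one visible in the log):

    # Sketch only: derive the native block size the same way the trace above does.
    native_bs_from_identify() {
        local pci=$1 id lbaf re
        id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")
        re='Current LBA Format: *LBA Format #([0-9]+)'
        [[ $id =~ $re ]] && lbaf=${BASH_REMATCH[1]}            # lbaf=04 in this run
        re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
        [[ $id =~ $re ]] && echo "${BASH_REMATCH[1]}"          # prints 4096
    }
    # e.g. native_bs=$(native_bs_from_identify 0000:00:10.0)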
00:05:41.244 10:23:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:05:41.244 10:23:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:05:41.244 10:23:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:05:41.244 10:23:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:05:41.245 10:23:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:05:41.245 10:23:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:05:41.246 10:23:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:05:41.246 10:23:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:05:41.246 10:23:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:05:41.246 10:23:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:05:41.246 10:23:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:05:41.246 10:23:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:41.246 10:23:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:05:41.246 10:23:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:41.246 10:23:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:41.246 10:23:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:41.246 10:23:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:41.246 10:23:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:41.246 ************************************ 00:05:41.246 START TEST dd_bs_lt_native_bs 00:05:41.246 ************************************ 00:05:41.246 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1127 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:41.246 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:05:41.246 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:41.246 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.246 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.246 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.246 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.246 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.246 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.246 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.246 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:41.246 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:41.504 { 00:05:41.504 "subsystems": [ 00:05:41.504 { 00:05:41.504 "subsystem": "bdev", 00:05:41.504 "config": [ 00:05:41.504 { 00:05:41.504 "params": { 00:05:41.504 "trtype": "pcie", 00:05:41.504 "traddr": "0000:00:10.0", 00:05:41.504 "name": "Nvme0" 00:05:41.504 }, 00:05:41.504 "method": "bdev_nvme_attach_controller" 00:05:41.504 }, 00:05:41.504 { 00:05:41.504 "method": "bdev_wait_for_examine" 00:05:41.504 } 00:05:41.504 ] 00:05:41.504 } 00:05:41.504 ] 00:05:41.504 } 00:05:41.504 [2024-11-15 10:23:42.123278] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:05:41.504 [2024-11-15 10:23:42.123382] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59600 ] 00:05:41.504 [2024-11-15 10:23:42.279006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.504 [2024-11-15 10:23:42.346988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.763 [2024-11-15 10:23:42.410693] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.763 [2024-11-15 10:23:42.524041] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:05:41.763 [2024-11-15 10:23:42.524151] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:42.021 [2024-11-15 10:23:42.659659] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:42.021 00:05:42.021 real 0m0.668s 00:05:42.021 user 0m0.455s 00:05:42.021 sys 0m0.171s 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:42.021 10:23:42 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:05:42.021 ************************************ 00:05:42.021 END TEST dd_bs_lt_native_bs 00:05:42.021 ************************************ 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:42.021 ************************************ 00:05:42.021 START TEST dd_rw 00:05:42.021 ************************************ 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1127 -- # basic_rw 4096 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:42.021 10:23:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:42.588 10:23:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:05:42.588 10:23:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:42.588 10:23:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:42.588 10:23:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:42.588 [2024-11-15 10:23:43.437525] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
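A note for readers following the dd_bs_lt_native_bs result that ends above: spdk_dd is expected to reject --bs=2048 against the 4096-byte native block, so the harness wraps the call in NOT and only requires a non-zero exit status. The real helper lives in common/autotest_common.sh and is not reproduced in this log; the traced es values (234, then 106, then 1) are merely consistent with a normalization step along the lines of the hypothetical sketch below, which is an illustration rather than the actual implementation.

    # Hypothetical sketch matching the traced es=234 -> es=106 -> es=1 sequence;
    # not the actual common/autotest_common.sh helper.
    NOT() {
        local es=0
        "$@" || es=$?                      # run spdk_dd; it exits 234 in this log
        ((es > 128)) && es=$((es - 128))   # 234 - 128 = 106
        case "$es" in
            106) es=1 ;;                   # collapse the traced failure code to a plain failure
        esac
        ((!es == 0))                       # succeed only if the wrapped command failed
    }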
00:05:42.588 [2024-11-15 10:23:43.437646] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59631 ] 00:05:42.847 { 00:05:42.847 "subsystems": [ 00:05:42.847 { 00:05:42.847 "subsystem": "bdev", 00:05:42.847 "config": [ 00:05:42.847 { 00:05:42.847 "params": { 00:05:42.847 "trtype": "pcie", 00:05:42.847 "traddr": "0000:00:10.0", 00:05:42.847 "name": "Nvme0" 00:05:42.847 }, 00:05:42.847 "method": "bdev_nvme_attach_controller" 00:05:42.847 }, 00:05:42.847 { 00:05:42.847 "method": "bdev_wait_for_examine" 00:05:42.847 } 00:05:42.847 ] 00:05:42.847 } 00:05:42.847 ] 00:05:42.847 } 00:05:42.847 [2024-11-15 10:23:43.585189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.847 [2024-11-15 10:23:43.640450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.847 [2024-11-15 10:23:43.693307] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:43.105  [2024-11-15T10:23:44.217Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:43.364 00:05:43.364 10:23:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:05:43.364 10:23:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:43.364 10:23:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:43.364 10:23:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:43.364 [2024-11-15 10:23:44.043859] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
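The dd_rw test that begins above repeats one write/read cycle across three block sizes and two queue depths. The assignments traced from dd/basic_rw.sh (native_bs=4096, bss+=($((native_bs << bs))), the qds array, count=15 and size=61440 for the first round) amount to roughly the sweep below; this is a readable sketch of the traced setup, not the verbatim script.

    native_bs=4096            # parsed from "LBA Format #04: Data Size: 4096" above
    qds=(1 64)                # queue depths exercised for every block size
    bss=()
    for bs in {0..2}; do
        bss+=($((native_bs << bs)))    # 4096, 8192, 16384
    done
    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            # one read/write round per (bs, qd) pair; this run uses counts of
            # 15, 7 and 3 blocks, i.e. 61440, 57344 and 49152 bytes respectively
            :
        done
    done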
00:05:43.364 [2024-11-15 10:23:44.043985] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59650 ] 00:05:43.364 { 00:05:43.364 "subsystems": [ 00:05:43.364 { 00:05:43.364 "subsystem": "bdev", 00:05:43.364 "config": [ 00:05:43.364 { 00:05:43.364 "params": { 00:05:43.364 "trtype": "pcie", 00:05:43.364 "traddr": "0000:00:10.0", 00:05:43.364 "name": "Nvme0" 00:05:43.364 }, 00:05:43.364 "method": "bdev_nvme_attach_controller" 00:05:43.364 }, 00:05:43.364 { 00:05:43.364 "method": "bdev_wait_for_examine" 00:05:43.364 } 00:05:43.364 ] 00:05:43.364 } 00:05:43.364 ] 00:05:43.364 } 00:05:43.364 [2024-11-15 10:23:44.185564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.623 [2024-11-15 10:23:44.231743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.623 [2024-11-15 10:23:44.285698] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:43.623  [2024-11-15T10:23:44.735Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:43.882 00:05:43.882 10:23:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:43.882 10:23:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:43.882 10:23:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:43.882 10:23:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:43.882 10:23:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:43.882 10:23:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:43.882 10:23:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:43.882 10:23:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:43.882 10:23:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:43.882 10:23:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:43.882 10:23:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:43.882 [2024-11-15 10:23:44.648322] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:05:43.882 [2024-11-15 10:23:44.648433] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59660 ] 00:05:43.882 { 00:05:43.882 "subsystems": [ 00:05:43.882 { 00:05:43.882 "subsystem": "bdev", 00:05:43.882 "config": [ 00:05:43.882 { 00:05:43.882 "params": { 00:05:43.882 "trtype": "pcie", 00:05:43.882 "traddr": "0000:00:10.0", 00:05:43.882 "name": "Nvme0" 00:05:43.882 }, 00:05:43.882 "method": "bdev_nvme_attach_controller" 00:05:43.882 }, 00:05:43.882 { 00:05:43.882 "method": "bdev_wait_for_examine" 00:05:43.882 } 00:05:43.882 ] 00:05:43.882 } 00:05:43.882 ] 00:05:43.882 } 00:05:44.140 [2024-11-15 10:23:44.793513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.140 [2024-11-15 10:23:44.834289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.140 [2024-11-15 10:23:44.890176] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:44.399  [2024-11-15T10:23:45.252Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:44.399 00:05:44.399 10:23:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:44.399 10:23:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:44.399 10:23:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:44.399 10:23:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:44.399 10:23:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:44.399 10:23:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:44.399 10:23:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:44.965 10:23:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:05:44.965 10:23:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:44.965 10:23:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:44.965 10:23:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:44.965 [2024-11-15 10:23:45.799134] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
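Each (block size, queue depth) round follows the same four traced steps: write a pre-generated dump file to the NVMe bdev, read the same region back into a second file, byte-compare the two, then zero the region before the next round. Reassembled from the commands above for the bs=4096, qd=1 case it looks roughly as follows; gen_conf is the harness helper that emits the bdev JSON seen throughout this section, and feeding it through process substitution is only an assumption consistent with the /dev/fd/62 paths in the trace (the exact plumbing lives in dd/common.sh).

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    D=/home/vagrant/spdk_repo/spdk/test/dd

    # write 15 x 4096 bytes from dump0 to the Nvme0n1 bdev
    "$DD" --if="$D/dd.dump0" --ob=Nvme0n1 --bs=4096 --qd=1 --json <(gen_conf)
    # read the same 15 blocks back into dump1
    "$DD" --ib=Nvme0n1 --of="$D/dd.dump1" --bs=4096 --qd=1 --count=15 --json <(gen_conf)
    # the round passes only if both dumps are identical
    diff -q "$D/dd.dump0" "$D/dd.dump1"
    # clear_nvme: zero the first 1 MiB of the bdev before the next round
    "$DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)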
00:05:44.965 [2024-11-15 10:23:45.799220] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59684 ] 00:05:44.965 { 00:05:44.965 "subsystems": [ 00:05:44.965 { 00:05:44.965 "subsystem": "bdev", 00:05:44.965 "config": [ 00:05:44.965 { 00:05:44.965 "params": { 00:05:44.965 "trtype": "pcie", 00:05:44.965 "traddr": "0000:00:10.0", 00:05:44.965 "name": "Nvme0" 00:05:44.965 }, 00:05:44.965 "method": "bdev_nvme_attach_controller" 00:05:44.965 }, 00:05:44.965 { 00:05:44.965 "method": "bdev_wait_for_examine" 00:05:44.965 } 00:05:44.965 ] 00:05:44.965 } 00:05:44.965 ] 00:05:44.965 } 00:05:45.232 [2024-11-15 10:23:45.939724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.232 [2024-11-15 10:23:45.984039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.232 [2024-11-15 10:23:46.035529] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.493  [2024-11-15T10:23:46.346Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:45.493 00:05:45.493 10:23:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:05:45.493 10:23:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:45.493 10:23:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:45.493 10:23:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:45.753 [2024-11-15 10:23:46.376544] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:05:45.753 [2024-11-15 10:23:46.376641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59698 ] 00:05:45.753 { 00:05:45.753 "subsystems": [ 00:05:45.753 { 00:05:45.753 "subsystem": "bdev", 00:05:45.753 "config": [ 00:05:45.753 { 00:05:45.753 "params": { 00:05:45.753 "trtype": "pcie", 00:05:45.753 "traddr": "0000:00:10.0", 00:05:45.753 "name": "Nvme0" 00:05:45.753 }, 00:05:45.753 "method": "bdev_nvme_attach_controller" 00:05:45.753 }, 00:05:45.753 { 00:05:45.753 "method": "bdev_wait_for_examine" 00:05:45.753 } 00:05:45.753 ] 00:05:45.753 } 00:05:45.753 ] 00:05:45.753 } 00:05:45.753 [2024-11-15 10:23:46.521515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.753 [2024-11-15 10:23:46.573936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.012 [2024-11-15 10:23:46.626040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:46.012  [2024-11-15T10:23:47.124Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:46.271 00:05:46.271 10:23:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:46.271 10:23:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:46.271 10:23:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:46.271 10:23:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:46.271 10:23:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:46.271 10:23:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:46.271 10:23:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:46.271 10:23:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:46.271 10:23:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:46.271 10:23:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:46.271 10:23:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:46.271 [2024-11-15 10:23:46.972817] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:05:46.271 [2024-11-15 10:23:46.972941] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59719 ] 00:05:46.271 { 00:05:46.271 "subsystems": [ 00:05:46.271 { 00:05:46.271 "subsystem": "bdev", 00:05:46.271 "config": [ 00:05:46.271 { 00:05:46.271 "params": { 00:05:46.271 "trtype": "pcie", 00:05:46.271 "traddr": "0000:00:10.0", 00:05:46.271 "name": "Nvme0" 00:05:46.271 }, 00:05:46.271 "method": "bdev_nvme_attach_controller" 00:05:46.271 }, 00:05:46.271 { 00:05:46.271 "method": "bdev_wait_for_examine" 00:05:46.271 } 00:05:46.271 ] 00:05:46.271 } 00:05:46.271 ] 00:05:46.271 } 00:05:46.271 [2024-11-15 10:23:47.120851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.530 [2024-11-15 10:23:47.166034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.530 [2024-11-15 10:23:47.217615] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:46.530  [2024-11-15T10:23:47.642Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:46.789 00:05:46.789 10:23:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:46.789 10:23:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:46.789 10:23:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:46.789 10:23:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:46.789 10:23:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:46.789 10:23:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:46.789 10:23:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:46.789 10:23:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:47.357 10:23:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:05:47.357 10:23:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:47.357 10:23:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:47.357 10:23:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:47.357 [2024-11-15 10:23:48.117052] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
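The JSON document echoed before every spdk_dd invocation in this section is the entire bdev configuration the test needs: attach the emulated NVMe controller at PCI address 0000:00:10.0 as bdev "Nvme0" (which provides the Nvme0n1 namespace bdev used as the dd target) and wait for bdev examination to finish. Written out as a standalone file, the equivalent config would be the snippet below; it could just as well be saved to disk and passed to spdk_dd with --json <path> instead of the /dev/fd descriptor the harness uses here.

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "trtype": "pcie",
                "traddr": "0000:00:10.0",
                "name": "Nvme0"
              },
              "method": "bdev_nvme_attach_controller"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }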
00:05:47.357 [2024-11-15 10:23:48.117186] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59738 ] 00:05:47.357 { 00:05:47.357 "subsystems": [ 00:05:47.357 { 00:05:47.357 "subsystem": "bdev", 00:05:47.357 "config": [ 00:05:47.357 { 00:05:47.357 "params": { 00:05:47.357 "trtype": "pcie", 00:05:47.357 "traddr": "0000:00:10.0", 00:05:47.357 "name": "Nvme0" 00:05:47.357 }, 00:05:47.357 "method": "bdev_nvme_attach_controller" 00:05:47.357 }, 00:05:47.357 { 00:05:47.357 "method": "bdev_wait_for_examine" 00:05:47.357 } 00:05:47.357 ] 00:05:47.357 } 00:05:47.357 ] 00:05:47.357 } 00:05:47.616 [2024-11-15 10:23:48.257861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.616 [2024-11-15 10:23:48.320214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.616 [2024-11-15 10:23:48.372358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.874  [2024-11-15T10:23:48.727Z] Copying: 56/56 [kB] (average 27 MBps) 00:05:47.874 00:05:47.874 10:23:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:05:47.874 10:23:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:47.874 10:23:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:47.874 10:23:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:48.133 [2024-11-15 10:23:48.745841] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:05:48.133 [2024-11-15 10:23:48.745980] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59746 ] 00:05:48.133 { 00:05:48.133 "subsystems": [ 00:05:48.133 { 00:05:48.133 "subsystem": "bdev", 00:05:48.133 "config": [ 00:05:48.133 { 00:05:48.133 "params": { 00:05:48.133 "trtype": "pcie", 00:05:48.133 "traddr": "0000:00:10.0", 00:05:48.133 "name": "Nvme0" 00:05:48.133 }, 00:05:48.133 "method": "bdev_nvme_attach_controller" 00:05:48.133 }, 00:05:48.133 { 00:05:48.133 "method": "bdev_wait_for_examine" 00:05:48.133 } 00:05:48.133 ] 00:05:48.133 } 00:05:48.133 ] 00:05:48.133 } 00:05:48.133 [2024-11-15 10:23:48.894097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.133 [2024-11-15 10:23:48.962598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.391 [2024-11-15 10:23:49.022609] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.391  [2024-11-15T10:23:49.503Z] Copying: 56/56 [kB] (average 27 MBps) 00:05:48.650 00:05:48.650 10:23:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:48.650 10:23:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:48.650 10:23:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:48.650 10:23:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:48.650 10:23:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:48.650 10:23:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:48.650 10:23:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:48.650 10:23:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:48.650 10:23:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:48.650 10:23:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:48.650 10:23:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:48.650 [2024-11-15 10:23:49.373298] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:05:48.650 [2024-11-15 10:23:49.373407] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59767 ] 00:05:48.650 { 00:05:48.650 "subsystems": [ 00:05:48.650 { 00:05:48.650 "subsystem": "bdev", 00:05:48.650 "config": [ 00:05:48.650 { 00:05:48.650 "params": { 00:05:48.650 "trtype": "pcie", 00:05:48.650 "traddr": "0000:00:10.0", 00:05:48.650 "name": "Nvme0" 00:05:48.650 }, 00:05:48.650 "method": "bdev_nvme_attach_controller" 00:05:48.650 }, 00:05:48.650 { 00:05:48.650 "method": "bdev_wait_for_examine" 00:05:48.650 } 00:05:48.650 ] 00:05:48.650 } 00:05:48.650 ] 00:05:48.650 } 00:05:48.910 [2024-11-15 10:23:49.512658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.910 [2024-11-15 10:23:49.572612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.910 [2024-11-15 10:23:49.627870] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.910  [2024-11-15T10:23:50.030Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:49.177 00:05:49.177 10:23:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:49.177 10:23:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:49.177 10:23:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:49.177 10:23:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:49.177 10:23:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:49.177 10:23:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:49.177 10:23:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:49.744 10:23:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:05:49.744 10:23:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:49.744 10:23:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:49.744 10:23:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:49.744 [2024-11-15 10:23:50.516801] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:05:49.744 [2024-11-15 10:23:50.516945] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59786 ] 00:05:49.744 { 00:05:49.744 "subsystems": [ 00:05:49.744 { 00:05:49.744 "subsystem": "bdev", 00:05:49.744 "config": [ 00:05:49.744 { 00:05:49.744 "params": { 00:05:49.744 "trtype": "pcie", 00:05:49.744 "traddr": "0000:00:10.0", 00:05:49.744 "name": "Nvme0" 00:05:49.744 }, 00:05:49.744 "method": "bdev_nvme_attach_controller" 00:05:49.744 }, 00:05:49.744 { 00:05:49.744 "method": "bdev_wait_for_examine" 00:05:49.744 } 00:05:49.744 ] 00:05:49.744 } 00:05:49.744 ] 00:05:49.744 } 00:05:50.003 [2024-11-15 10:23:50.667250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.003 [2024-11-15 10:23:50.714049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.003 [2024-11-15 10:23:50.767461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.262  [2024-11-15T10:23:51.115Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:50.262 00:05:50.262 10:23:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:05:50.262 10:23:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:50.262 10:23:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:50.262 10:23:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:50.520 [2024-11-15 10:23:51.122588] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:05:50.520 [2024-11-15 10:23:51.122699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59805 ] 00:05:50.520 { 00:05:50.520 "subsystems": [ 00:05:50.520 { 00:05:50.520 "subsystem": "bdev", 00:05:50.520 "config": [ 00:05:50.520 { 00:05:50.520 "params": { 00:05:50.520 "trtype": "pcie", 00:05:50.520 "traddr": "0000:00:10.0", 00:05:50.520 "name": "Nvme0" 00:05:50.520 }, 00:05:50.520 "method": "bdev_nvme_attach_controller" 00:05:50.520 }, 00:05:50.520 { 00:05:50.520 "method": "bdev_wait_for_examine" 00:05:50.520 } 00:05:50.520 ] 00:05:50.520 } 00:05:50.520 ] 00:05:50.520 } 00:05:50.520 [2024-11-15 10:23:51.265951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.520 [2024-11-15 10:23:51.313038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.520 [2024-11-15 10:23:51.367245] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.779  [2024-11-15T10:23:51.890Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:51.037 00:05:51.037 10:23:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:51.037 10:23:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:51.037 10:23:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:51.037 10:23:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:51.037 10:23:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:51.037 10:23:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:51.037 10:23:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:51.037 10:23:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:51.037 10:23:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:51.037 10:23:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:51.037 10:23:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:51.037 { 00:05:51.037 "subsystems": [ 00:05:51.037 { 00:05:51.037 "subsystem": "bdev", 00:05:51.037 "config": [ 00:05:51.037 { 00:05:51.037 "params": { 00:05:51.037 "trtype": "pcie", 00:05:51.037 "traddr": "0000:00:10.0", 00:05:51.037 "name": "Nvme0" 00:05:51.037 }, 00:05:51.037 "method": "bdev_nvme_attach_controller" 00:05:51.037 }, 00:05:51.037 { 00:05:51.037 "method": "bdev_wait_for_examine" 00:05:51.037 } 00:05:51.037 ] 00:05:51.037 } 00:05:51.038 ] 00:05:51.038 } 00:05:51.038 [2024-11-15 10:23:51.723249] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:05:51.038 [2024-11-15 10:23:51.723357] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59815 ] 00:05:51.038 [2024-11-15 10:23:51.869004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.296 [2024-11-15 10:23:51.912779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.296 [2024-11-15 10:23:51.968300] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:51.296  [2024-11-15T10:23:52.407Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:51.554 00:05:51.554 10:23:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:51.554 10:23:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:51.554 10:23:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:51.554 10:23:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:51.554 10:23:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:51.554 10:23:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:51.554 10:23:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:51.554 10:23:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:52.122 10:23:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:05:52.122 10:23:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:52.122 10:23:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:52.122 10:23:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:52.122 [2024-11-15 10:23:52.783364] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:05:52.122 [2024-11-15 10:23:52.783467] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59834 ] 00:05:52.122 { 00:05:52.122 "subsystems": [ 00:05:52.122 { 00:05:52.122 "subsystem": "bdev", 00:05:52.122 "config": [ 00:05:52.122 { 00:05:52.122 "params": { 00:05:52.122 "trtype": "pcie", 00:05:52.122 "traddr": "0000:00:10.0", 00:05:52.122 "name": "Nvme0" 00:05:52.122 }, 00:05:52.122 "method": "bdev_nvme_attach_controller" 00:05:52.122 }, 00:05:52.122 { 00:05:52.122 "method": "bdev_wait_for_examine" 00:05:52.122 } 00:05:52.122 ] 00:05:52.122 } 00:05:52.122 ] 00:05:52.122 } 00:05:52.122 [2024-11-15 10:23:52.929429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.381 [2024-11-15 10:23:52.974985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.381 [2024-11-15 10:23:53.030846] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.381  [2024-11-15T10:23:53.493Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:52.640 00:05:52.640 10:23:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:05:52.640 10:23:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:52.640 10:23:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:52.640 10:23:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:52.640 [2024-11-15 10:23:53.373738] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:05:52.640 [2024-11-15 10:23:53.373855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59853 ] 00:05:52.640 { 00:05:52.640 "subsystems": [ 00:05:52.640 { 00:05:52.640 "subsystem": "bdev", 00:05:52.640 "config": [ 00:05:52.640 { 00:05:52.640 "params": { 00:05:52.640 "trtype": "pcie", 00:05:52.640 "traddr": "0000:00:10.0", 00:05:52.640 "name": "Nvme0" 00:05:52.640 }, 00:05:52.640 "method": "bdev_nvme_attach_controller" 00:05:52.640 }, 00:05:52.640 { 00:05:52.640 "method": "bdev_wait_for_examine" 00:05:52.640 } 00:05:52.640 ] 00:05:52.640 } 00:05:52.640 ] 00:05:52.640 } 00:05:52.898 [2024-11-15 10:23:53.510797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.898 [2024-11-15 10:23:53.556296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.898 [2024-11-15 10:23:53.609363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.898  [2024-11-15T10:23:54.010Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:53.157 00:05:53.157 10:23:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:53.157 10:23:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:53.157 10:23:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:53.157 10:23:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:53.157 10:23:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:53.157 10:23:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:53.157 10:23:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:53.157 10:23:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:53.157 10:23:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:53.157 10:23:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:53.157 10:23:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:53.157 [2024-11-15 10:23:53.952708] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:05:53.157 [2024-11-15 10:23:53.952814] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59863 ] 00:05:53.157 { 00:05:53.157 "subsystems": [ 00:05:53.157 { 00:05:53.157 "subsystem": "bdev", 00:05:53.157 "config": [ 00:05:53.157 { 00:05:53.157 "params": { 00:05:53.157 "trtype": "pcie", 00:05:53.157 "traddr": "0000:00:10.0", 00:05:53.157 "name": "Nvme0" 00:05:53.158 }, 00:05:53.158 "method": "bdev_nvme_attach_controller" 00:05:53.158 }, 00:05:53.158 { 00:05:53.158 "method": "bdev_wait_for_examine" 00:05:53.158 } 00:05:53.158 ] 00:05:53.158 } 00:05:53.158 ] 00:05:53.158 } 00:05:53.417 [2024-11-15 10:23:54.093787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.417 [2024-11-15 10:23:54.162450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.417 [2024-11-15 10:23:54.221414] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.676  [2024-11-15T10:23:54.789Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:53.936 00:05:53.936 10:23:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:53.936 10:23:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:53.936 10:23:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:53.936 10:23:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:53.936 10:23:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:53.936 10:23:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:53.936 10:23:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:54.194 10:23:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:05:54.194 10:23:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:54.194 10:23:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:54.194 10:23:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:54.454 [2024-11-15 10:23:55.056386] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:05:54.454 [2024-11-15 10:23:55.056550] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59882 ] 00:05:54.454 { 00:05:54.454 "subsystems": [ 00:05:54.454 { 00:05:54.454 "subsystem": "bdev", 00:05:54.454 "config": [ 00:05:54.454 { 00:05:54.454 "params": { 00:05:54.454 "trtype": "pcie", 00:05:54.454 "traddr": "0000:00:10.0", 00:05:54.454 "name": "Nvme0" 00:05:54.454 }, 00:05:54.454 "method": "bdev_nvme_attach_controller" 00:05:54.454 }, 00:05:54.454 { 00:05:54.454 "method": "bdev_wait_for_examine" 00:05:54.454 } 00:05:54.454 ] 00:05:54.454 } 00:05:54.454 ] 00:05:54.454 } 00:05:54.454 [2024-11-15 10:23:55.205716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.454 [2024-11-15 10:23:55.263441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.713 [2024-11-15 10:23:55.320682] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.713  [2024-11-15T10:23:55.825Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:54.972 00:05:54.972 10:23:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:05:54.972 10:23:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:54.972 10:23:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:54.972 10:23:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:54.972 [2024-11-15 10:23:55.688760] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:05:54.972 [2024-11-15 10:23:55.688895] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59901 ] 00:05:54.972 { 00:05:54.972 "subsystems": [ 00:05:54.972 { 00:05:54.972 "subsystem": "bdev", 00:05:54.972 "config": [ 00:05:54.972 { 00:05:54.972 "params": { 00:05:54.972 "trtype": "pcie", 00:05:54.972 "traddr": "0000:00:10.0", 00:05:54.972 "name": "Nvme0" 00:05:54.972 }, 00:05:54.972 "method": "bdev_nvme_attach_controller" 00:05:54.972 }, 00:05:54.972 { 00:05:54.972 "method": "bdev_wait_for_examine" 00:05:54.972 } 00:05:54.972 ] 00:05:54.972 } 00:05:54.972 ] 00:05:54.972 } 00:05:55.231 [2024-11-15 10:23:55.837317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.231 [2024-11-15 10:23:55.901669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.231 [2024-11-15 10:23:55.954628] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.231  [2024-11-15T10:23:56.343Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:55.490 00:05:55.490 10:23:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:55.490 10:23:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:55.490 10:23:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:55.490 10:23:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:55.490 10:23:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:55.490 10:23:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:55.490 10:23:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:55.490 10:23:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:55.490 10:23:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:55.490 10:23:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:55.490 10:23:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:55.490 { 00:05:55.490 "subsystems": [ 00:05:55.490 { 00:05:55.490 "subsystem": "bdev", 00:05:55.490 "config": [ 00:05:55.490 { 00:05:55.490 "params": { 00:05:55.490 "trtype": "pcie", 00:05:55.490 "traddr": "0000:00:10.0", 00:05:55.490 "name": "Nvme0" 00:05:55.490 }, 00:05:55.490 "method": "bdev_nvme_attach_controller" 00:05:55.490 }, 00:05:55.490 { 00:05:55.490 "method": "bdev_wait_for_examine" 00:05:55.490 } 00:05:55.490 ] 00:05:55.490 } 00:05:55.490 ] 00:05:55.490 } 00:05:55.490 [2024-11-15 10:23:56.332210] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:05:55.490 [2024-11-15 10:23:56.332312] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59922 ] 00:05:55.749 [2024-11-15 10:23:56.479372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.749 [2024-11-15 10:23:56.542100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.749 [2024-11-15 10:23:56.597178] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.008  [2024-11-15T10:23:57.121Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:56.268 00:05:56.268 00:05:56.268 real 0m14.126s 00:05:56.268 user 0m10.309s 00:05:56.268 sys 0m5.392s 00:05:56.268 ************************************ 00:05:56.268 END TEST dd_rw 00:05:56.268 ************************************ 00:05:56.268 10:23:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:56.268 10:23:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:56.268 10:23:56 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:05:56.268 10:23:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:56.268 10:23:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:56.268 10:23:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:56.268 ************************************ 00:05:56.268 START TEST dd_rw_offset 00:05:56.268 ************************************ 00:05:56.268 10:23:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1127 -- # basic_offset 00:05:56.268 10:23:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:05:56.268 10:23:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:05:56.268 10:23:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:05:56.268 10:23:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:56.268 10:23:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:05:56.268 10:23:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=jx0qll67cqlxnfrd454tzc6z47jxsiexaqw4coyaid01vwwi34t2v2ie9bg6tu6cw1yinz49uxpu3by6bhc8u579rl96kncmbjlln6bfjy2k3jw1zhqurrs3mclalynur86ffph0sbcsnkxfmmmap9n8x4ze65f26f6vy97ovhom3mnscydv11vgoe7of8rs8tcm4vcac8sca6vf357myb72t6e6uzs2f9gqtczh1aoa2ril7b0njncfzh8pfthtna651fn6q4ic72vpiq94i8f2qex654j8fcz5z0sxsb0ogpz99mygklpcoar80lsmwcz5kw1s7vmm949io3js4bonc5fwo5y57vicuxkxtygxo3cv48aj935x3w3hnv9pst3vtoqqjekt3afwwmfv1desn1xaavcex8izfnr1jnl3t5pxa559p8yvp4vax1dzy6vsg3wqdmj7hxxabvslzpgr7angd8req7j6nxkqke2les6ii5876a1x2xdp5973jjpbonwyit7z9cuu7e1av13741i9awhn4tb5fr0oyf5v6nbn1zqk5c77uv5mfgboeow6emwt9pezc5au9377qxfbsd5jod3rv3ffrtzkhv9obp8uj5g67zb74tytqe1cpow7u1rfojjom3h41qvlfxm9rnzdadqgwu4brn8zf1evwr0vexbejfjxrv5vslksqow4baqskw0daja77zophrc3205d9rwo9ek4p3vygqm12tsclosvfyfak0cq2vb89k4d8fd4hdxznits7xfiv9mt773mkqi83wtpq9ydepzwh9etwzgydjl19q7b3r5l027skae191dduw9phqtui2357m4ljeps36kg9e8ei1rp75d95v5jlg7udgn2dn4moypi9imsg9w7iz7a1lvuwlk68fjc5o6w517ukd9p4lzv9yqojpsb9wofi2v33k1bwud3t0vawdewbjrh75ppsi1kdod1nilifd7mfmi6f9gjj3vwkcp40ovtsvkwwj66qxm9m914xowhrrkpvjlkea5o4ht9fpnwug4g8muhhp33iw92idi3ebh9k1yaxen3l8a9li0psjbymx5lgh88j8jyw4ockd6mru6tep3428f0vhxq4araglpkqi48ap6fwt2jbq1okf7czjgir7ekdstimqo32xvptyg2kzt1od4vg6d9qyj161kwxc3g8e77hp9pgr3mxxlyk41v9wg6yfcjtlw5kogxjvk8r8emt7389g219p31sju73f3dyfm7hciqwhll6515pab8hm0l01rfssev8b3nib1fs0y2e9i6ecsegb1u1ftm1bdzdnbary9y7ypk96z0wbddy79gxd6r53sscfrq6agsz2qxb72ldnjjf0tj0g1z64hhhfyggi54xsmopxsrf1bmto4o0pfxlw9x95nqgpq8usqmfnxj2ozqdkda6qv1m5gqxnrrpl7w7nmrvv6h0iaq4zmxp7ztuutrkuvkvehjd8oadyiv81m8louc66kq47ihkacm1gjdhlifjgi4554eo58nvojv7jk8vb7vdrcdzrkpddt1z4fxjo4nyd2atf7umyvgyiijo95jq7shi1xhh662xkixfpm8wn7qct8xy0q3ppns84lew8fqaxui9vs8p3qme5ggq1zol26kygi29s5u3efksu4t0i31obyi0cypj1wwi24gqxf0ekv7n6u0t7o7snz2iwlh6mpai4pf0z5o5ztsrct1urh6f6264m02yb0pr7qrhqt8epx77o6z3pmkpck54cj2fx00cr6psz76h4jqjdwl5oslf3tafc8k1jgqw1yegfnlapz12snlwfemo6seb1ysv2bm5qeknoow1xafdplyek4nw1jpmctcr11h681aoru2d42z1r7fu88emgfa3b3oug64lujfmhov9cm835nr4atpy2bqk25l5hg8skfoa4mepqwukqstwai0vrexom5rdajglbmtloqtffwrulwqcuusdpy03h8da1rln6ryumbql8jk3lp60fzj5idn1x80nis072jtcs8kxwu6uxi83g2cdnlqcxh5jyh1cm1zyfe3rqiivaigb9kkf4huctdtdb3dawrvnpd5ku72e3sgdan19s87xtjqhci9pj7x0vqeoubg4xhn4g2cj5hf1uvhyitmmpxvjr77wrgqcsjisr5k27jwi07e8zefc5bzi8nf7weaps199eq45s4pt9vdncfqjr8rzbq0nhz8i4q98q76i3j6joptak30ursrk7ban5pt4hy7j20znk3b6bjtxktzpyckzeh5rw96tsrrfxsyuiks5y2mplvn14b5go3d1d14wt61vlbbwmzww66gnil8lqkcvqf7m95r6jkp58xja0dle4ie6u7tkcw07lz99gr18ozyxlyxyddm27ua38qlw60y6611z15yigpx7w8r74u5clof0c9raheh1ajm3nsqi9ia41rorgl539kvdkzb9lw8uoz6zhwoesz3l88a7aolo1el9cfkjuqp6bkgypwr6kdi4lhj60erw5sx9vqwd874tyv3pg2knysffj2afvzwvdd3tkra1sb1spfzuzhdvd8cvmp64wvrlkc158qxvp6m6sr7e6mv9fzeesn6h18naotihjkxkxci023rx596brzjd5kdaudtg6max2z3ye2b4twqa430j18vhud5j7bxhey12iuwf5zng7hk3dxonal5n34rvtccee7zkiz62sv2kunsy58mlxwmywzppr4z950gu7hka9fpo86kly1ulgeiiyygvyqlkoad7cqhjifpg4cuv0sr3f40rmktxsa5v8tj7amcfenmqzzp172u7mneq73jdgvuumcioup1bixbvfidopk9ubio0szmeaxz847qeo7n0xtr2m7ir0h7hzfu2a5llosvm69qp5yftu7cr8jzzjhfh02emz8yjv0e732f6wsjy3a572szx9of50s3ms68yqzqc00hqqz8c90u5k7rrpb8bvyxsbnez51am0q3obnlab3jc6u4tg6t0udt5xjs78id1ywatif8t1qus2sureswfbk4auh9asqcavsgzlioas9whw10wm6xhlb2lcnfm9t2fr0pttleqta7a4lewdr5t5fvl46mhfivimbu6k0r704uaunncv9sidtl94k0qspfhcypn88bncm71fq49fh4yv5z4pgu8e6pay3b5y6r8uxt6pgq6x2tnzdhno0s8v8ymeg9hxx6bpftxvpcruos5ialj3uk6qt3usutssgl2l7ez1eqmxangd9zlsauywjhn082l66flkt1avgiaal9lzwsig8ov2x7ibd574qe75ujviq6hdt6wh9c1y80xo3d45ap88jwujziv28px5uam1i0hxcpi8a4wuvo244o3eqjwi9nspcfubenvb6fcdfhzefs32t19qnmlakn2j3oadv2naupc9xoqie274pzftaj5yf8xov2qtgvie0ugwrs6ndihqnd522l344e4d20otu1ij
ufs2t63vnxja62polf0y2kufe5ban9mp1rgtclwihuab2tfmecrxw5il8n3gnf31u12r838fnotny69hl5zl1fkluifl4ts5pffkv3rtai8bbg54lixtgucjgn37udoj65fs4gg6drgpy3ed3t7ajrutlvk7t8emfwqt7mq5g0tyxpea9zswuooas0ey19butalpzxvn0m0t3o6ue00sdjm1nkwqmhiwjhgr9exw2xwm1r53kwx4i4wkirgb0fixwxeyld5y9b5y5c96ls6d9ir0bogag4d7utqivg4tifjqlxynsr5vm0kmscpkvmwnd3ad3enksdh8d67q93458ac6n5tu2bd43vbodzsgjclumq3zxt5nhjp20587ky4v91tr3w5ak51gida37crvw0fgeribrga5msv8rrl5zo902g9k3bv3e9d41uvcdoj9es3wm3nbq6u5jvu8ouecn20316atte49pmt5lrwwxgtykdg5mzzvuiuryumeuupvo77cgccfi9w2bnhjd7bnup8yj1pze7ic60 00:05:56.268 10:23:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:05:56.268 10:23:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:05:56.268 10:23:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:56.268 10:23:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:56.268 { 00:05:56.268 "subsystems": [ 00:05:56.268 { 00:05:56.268 "subsystem": "bdev", 00:05:56.268 "config": [ 00:05:56.268 { 00:05:56.268 "params": { 00:05:56.268 "trtype": "pcie", 00:05:56.268 "traddr": "0000:00:10.0", 00:05:56.268 "name": "Nvme0" 00:05:56.268 }, 00:05:56.268 "method": "bdev_nvme_attach_controller" 00:05:56.268 }, 00:05:56.268 { 00:05:56.268 "method": "bdev_wait_for_examine" 00:05:56.268 } 00:05:56.268 ] 00:05:56.268 } 00:05:56.268 ] 00:05:56.268 } 00:05:56.268 [2024-11-15 10:23:57.070098] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:05:56.268 [2024-11-15 10:23:57.070196] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59947 ] 00:05:56.527 [2024-11-15 10:23:57.218628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.527 [2024-11-15 10:23:57.271073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.527 [2024-11-15 10:23:57.325916] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.786  [2024-11-15T10:23:57.639Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:05:56.786 00:05:56.786 10:23:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:05:56.786 10:23:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:05:56.786 10:23:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:56.786 10:23:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:57.047 { 00:05:57.047 "subsystems": [ 00:05:57.047 { 00:05:57.047 "subsystem": "bdev", 00:05:57.047 "config": [ 00:05:57.047 { 00:05:57.047 "params": { 00:05:57.047 "trtype": "pcie", 00:05:57.047 "traddr": "0000:00:10.0", 00:05:57.047 "name": "Nvme0" 00:05:57.047 }, 00:05:57.047 "method": "bdev_nvme_attach_controller" 00:05:57.047 }, 00:05:57.047 { 00:05:57.047 "method": "bdev_wait_for_examine" 00:05:57.047 } 00:05:57.047 ] 00:05:57.047 } 00:05:57.047 ] 00:05:57.047 } 00:05:57.047 [2024-11-15 10:23:57.685273] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
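The dd_rw_offset test above checks that --seek on the write side and --skip on the read side address the same block: about 4 KiB of random text is generated (the long data= string above), written one block past the start of the bdev, read back from the same offset, and compared with the original. The flow below is reassembled from the traced commands; gen_bytes and gen_conf are harness helpers whose bodies are not shown in this log, the staging of $data into dd.dump0 and the redirection feeding the final read are assumptions, and the whole thing is a sketch rather than the verbatim dd/basic_rw.sh.

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    D=/home/vagrant/spdk_repo/spdk/test/dd

    data=$(gen_bytes 4096)          # the random text echoed in the log above
    (( count = seek = skip = 1 ))   # operate on a single block, one block in

    # write dd.dump0 (assumed to be staged with $data by the harness) one block into the bdev
    "$DD" --if="$D/dd.dump0" --ob=Nvme0n1 --seek="$seek" --json <(gen_conf)
    # read a single block back from the same offset
    "$DD" --ib=Nvme0n1 --of="$D/dd.dump1" --skip="$skip" --count="$count" --json <(gen_conf)
    # compare the first 4096 bytes read back with what was generated
    read -rn4096 data_check < "$D/dd.dump1"
    [[ $data == "$data_check" ]]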
00:05:57.047 [2024-11-15 10:23:57.685368] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59966 ] 00:05:57.047 [2024-11-15 10:23:57.833301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.047 [2024-11-15 10:23:57.887021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.371 [2024-11-15 10:23:57.942201] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.371  [2024-11-15T10:23:58.484Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:05:57.631 00:05:57.631 ************************************ 00:05:57.631 END TEST dd_rw_offset 00:05:57.631 ************************************ 00:05:57.631 10:23:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:05:57.632 10:23:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ jx0qll67cqlxnfrd454tzc6z47jxsiexaqw4coyaid01vwwi34t2v2ie9bg6tu6cw1yinz49uxpu3by6bhc8u579rl96kncmbjlln6bfjy2k3jw1zhqurrs3mclalynur86ffph0sbcsnkxfmmmap9n8x4ze65f26f6vy97ovhom3mnscydv11vgoe7of8rs8tcm4vcac8sca6vf357myb72t6e6uzs2f9gqtczh1aoa2ril7b0njncfzh8pfthtna651fn6q4ic72vpiq94i8f2qex654j8fcz5z0sxsb0ogpz99mygklpcoar80lsmwcz5kw1s7vmm949io3js4bonc5fwo5y57vicuxkxtygxo3cv48aj935x3w3hnv9pst3vtoqqjekt3afwwmfv1desn1xaavcex8izfnr1jnl3t5pxa559p8yvp4vax1dzy6vsg3wqdmj7hxxabvslzpgr7angd8req7j6nxkqke2les6ii5876a1x2xdp5973jjpbonwyit7z9cuu7e1av13741i9awhn4tb5fr0oyf5v6nbn1zqk5c77uv5mfgboeow6emwt9pezc5au9377qxfbsd5jod3rv3ffrtzkhv9obp8uj5g67zb74tytqe1cpow7u1rfojjom3h41qvlfxm9rnzdadqgwu4brn8zf1evwr0vexbejfjxrv5vslksqow4baqskw0daja77zophrc3205d9rwo9ek4p3vygqm12tsclosvfyfak0cq2vb89k4d8fd4hdxznits7xfiv9mt773mkqi83wtpq9ydepzwh9etwzgydjl19q7b3r5l027skae191dduw9phqtui2357m4ljeps36kg9e8ei1rp75d95v5jlg7udgn2dn4moypi9imsg9w7iz7a1lvuwlk68fjc5o6w517ukd9p4lzv9yqojpsb9wofi2v33k1bwud3t0vawdewbjrh75ppsi1kdod1nilifd7mfmi6f9gjj3vwkcp40ovtsvkwwj66qxm9m914xowhrrkpvjlkea5o4ht9fpnwug4g8muhhp33iw92idi3ebh9k1yaxen3l8a9li0psjbymx5lgh88j8jyw4ockd6mru6tep3428f0vhxq4araglpkqi48ap6fwt2jbq1okf7czjgir7ekdstimqo32xvptyg2kzt1od4vg6d9qyj161kwxc3g8e77hp9pgr3mxxlyk41v9wg6yfcjtlw5kogxjvk8r8emt7389g219p31sju73f3dyfm7hciqwhll6515pab8hm0l01rfssev8b3nib1fs0y2e9i6ecsegb1u1ftm1bdzdnbary9y7ypk96z0wbddy79gxd6r53sscfrq6agsz2qxb72ldnjjf0tj0g1z64hhhfyggi54xsmopxsrf1bmto4o0pfxlw9x95nqgpq8usqmfnxj2ozqdkda6qv1m5gqxnrrpl7w7nmrvv6h0iaq4zmxp7ztuutrkuvkvehjd8oadyiv81m8louc66kq47ihkacm1gjdhlifjgi4554eo58nvojv7jk8vb7vdrcdzrkpddt1z4fxjo4nyd2atf7umyvgyiijo95jq7shi1xhh662xkixfpm8wn7qct8xy0q3ppns84lew8fqaxui9vs8p3qme5ggq1zol26kygi29s5u3efksu4t0i31obyi0cypj1wwi24gqxf0ekv7n6u0t7o7snz2iwlh6mpai4pf0z5o5ztsrct1urh6f6264m02yb0pr7qrhqt8epx77o6z3pmkpck54cj2fx00cr6psz76h4jqjdwl5oslf3tafc8k1jgqw1yegfnlapz12snlwfemo6seb1ysv2bm5qeknoow1xafdplyek4nw1jpmctcr11h681aoru2d42z1r7fu88emgfa3b3oug64lujfmhov9cm835nr4atpy2bqk25l5hg8skfoa4mepqwukqstwai0vrexom5rdajglbmtloqtffwrulwqcuusdpy03h8da1rln6ryumbql8jk3lp60fzj5idn1x80nis072jtcs8kxwu6uxi83g2cdnlqcxh5jyh1cm1zyfe3rqiivaigb9kkf4huctdtdb3dawrvnpd5ku72e3sgdan19s87xtjqhci9pj7x0vqeoubg4xhn4g2cj5hf1uvhyitmmpxvjr77wrgqcsjisr5k27jwi07e8zefc5bzi8nf7weaps199eq45s4pt9vdncfqjr8rzbq0nhz8i4q98q76i3j6joptak30ursrk7ban5pt4hy7j20znk3b6bjtxktzpyckzeh5rw96tsrrfxsyuiks5y2mplvn14b5go3d1d14wt61vlbbwmzww66gnil8lqkcvqf7m95r6jkp58xja0dle4ie6u7tkcw07lz99gr18ozyxlyxyddm27ua38qlw60y6611z15yigpx7w8r74u5clof0
c9raheh1ajm3nsqi9ia41rorgl539kvdkzb9lw8uoz6zhwoesz3l88a7aolo1el9cfkjuqp6bkgypwr6kdi4lhj60erw5sx9vqwd874tyv3pg2knysffj2afvzwvdd3tkra1sb1spfzuzhdvd8cvmp64wvrlkc158qxvp6m6sr7e6mv9fzeesn6h18naotihjkxkxci023rx596brzjd5kdaudtg6max2z3ye2b4twqa430j18vhud5j7bxhey12iuwf5zng7hk3dxonal5n34rvtccee7zkiz62sv2kunsy58mlxwmywzppr4z950gu7hka9fpo86kly1ulgeiiyygvyqlkoad7cqhjifpg4cuv0sr3f40rmktxsa5v8tj7amcfenmqzzp172u7mneq73jdgvuumcioup1bixbvfidopk9ubio0szmeaxz847qeo7n0xtr2m7ir0h7hzfu2a5llosvm69qp5yftu7cr8jzzjhfh02emz8yjv0e732f6wsjy3a572szx9of50s3ms68yqzqc00hqqz8c90u5k7rrpb8bvyxsbnez51am0q3obnlab3jc6u4tg6t0udt5xjs78id1ywatif8t1qus2sureswfbk4auh9asqcavsgzlioas9whw10wm6xhlb2lcnfm9t2fr0pttleqta7a4lewdr5t5fvl46mhfivimbu6k0r704uaunncv9sidtl94k0qspfhcypn88bncm71fq49fh4yv5z4pgu8e6pay3b5y6r8uxt6pgq6x2tnzdhno0s8v8ymeg9hxx6bpftxvpcruos5ialj3uk6qt3usutssgl2l7ez1eqmxangd9zlsauywjhn082l66flkt1avgiaal9lzwsig8ov2x7ibd574qe75ujviq6hdt6wh9c1y80xo3d45ap88jwujziv28px5uam1i0hxcpi8a4wuvo244o3eqjwi9nspcfubenvb6fcdfhzefs32t19qnmlakn2j3oadv2naupc9xoqie274pzftaj5yf8xov2qtgvie0ugwrs6ndihqnd522l344e4d20otu1ijufs2t63vnxja62polf0y2kufe5ban9mp1rgtclwihuab2tfmecrxw5il8n3gnf31u12r838fnotny69hl5zl1fkluifl4ts5pffkv3rtai8bbg54lixtgucjgn37udoj65fs4gg6drgpy3ed3t7ajrutlvk7t8emfwqt7mq5g0tyxpea9zswuooas0ey19butalpzxvn0m0t3o6ue00sdjm1nkwqmhiwjhgr9exw2xwm1r53kwx4i4wkirgb0fixwxeyld5y9b5y5c96ls6d9ir0bogag4d7utqivg4tifjqlxynsr5vm0kmscpkvmwnd3ad3enksdh8d67q93458ac6n5tu2bd43vbodzsgjclumq3zxt5nhjp20587ky4v91tr3w5ak51gida37crvw0fgeribrga5msv8rrl5zo902g9k3bv3e9d41uvcdoj9es3wm3nbq6u5jvu8ouecn20316atte49pmt5lrwwxgtykdg5mzzvuiuryumeuupvo77cgccfi9w2bnhjd7bnup8yj1pze7ic60 == \j\x\0\q\l\l\6\7\c\q\l\x\n\f\r\d\4\5\4\t\z\c\6\z\4\7\j\x\s\i\e\x\a\q\w\4\c\o\y\a\i\d\0\1\v\w\w\i\3\4\t\2\v\2\i\e\9\b\g\6\t\u\6\c\w\1\y\i\n\z\4\9\u\x\p\u\3\b\y\6\b\h\c\8\u\5\7\9\r\l\9\6\k\n\c\m\b\j\l\l\n\6\b\f\j\y\2\k\3\j\w\1\z\h\q\u\r\r\s\3\m\c\l\a\l\y\n\u\r\8\6\f\f\p\h\0\s\b\c\s\n\k\x\f\m\m\m\a\p\9\n\8\x\4\z\e\6\5\f\2\6\f\6\v\y\9\7\o\v\h\o\m\3\m\n\s\c\y\d\v\1\1\v\g\o\e\7\o\f\8\r\s\8\t\c\m\4\v\c\a\c\8\s\c\a\6\v\f\3\5\7\m\y\b\7\2\t\6\e\6\u\z\s\2\f\9\g\q\t\c\z\h\1\a\o\a\2\r\i\l\7\b\0\n\j\n\c\f\z\h\8\p\f\t\h\t\n\a\6\5\1\f\n\6\q\4\i\c\7\2\v\p\i\q\9\4\i\8\f\2\q\e\x\6\5\4\j\8\f\c\z\5\z\0\s\x\s\b\0\o\g\p\z\9\9\m\y\g\k\l\p\c\o\a\r\8\0\l\s\m\w\c\z\5\k\w\1\s\7\v\m\m\9\4\9\i\o\3\j\s\4\b\o\n\c\5\f\w\o\5\y\5\7\v\i\c\u\x\k\x\t\y\g\x\o\3\c\v\4\8\a\j\9\3\5\x\3\w\3\h\n\v\9\p\s\t\3\v\t\o\q\q\j\e\k\t\3\a\f\w\w\m\f\v\1\d\e\s\n\1\x\a\a\v\c\e\x\8\i\z\f\n\r\1\j\n\l\3\t\5\p\x\a\5\5\9\p\8\y\v\p\4\v\a\x\1\d\z\y\6\v\s\g\3\w\q\d\m\j\7\h\x\x\a\b\v\s\l\z\p\g\r\7\a\n\g\d\8\r\e\q\7\j\6\n\x\k\q\k\e\2\l\e\s\6\i\i\5\8\7\6\a\1\x\2\x\d\p\5\9\7\3\j\j\p\b\o\n\w\y\i\t\7\z\9\c\u\u\7\e\1\a\v\1\3\7\4\1\i\9\a\w\h\n\4\t\b\5\f\r\0\o\y\f\5\v\6\n\b\n\1\z\q\k\5\c\7\7\u\v\5\m\f\g\b\o\e\o\w\6\e\m\w\t\9\p\e\z\c\5\a\u\9\3\7\7\q\x\f\b\s\d\5\j\o\d\3\r\v\3\f\f\r\t\z\k\h\v\9\o\b\p\8\u\j\5\g\6\7\z\b\7\4\t\y\t\q\e\1\c\p\o\w\7\u\1\r\f\o\j\j\o\m\3\h\4\1\q\v\l\f\x\m\9\r\n\z\d\a\d\q\g\w\u\4\b\r\n\8\z\f\1\e\v\w\r\0\v\e\x\b\e\j\f\j\x\r\v\5\v\s\l\k\s\q\o\w\4\b\a\q\s\k\w\0\d\a\j\a\7\7\z\o\p\h\r\c\3\2\0\5\d\9\r\w\o\9\e\k\4\p\3\v\y\g\q\m\1\2\t\s\c\l\o\s\v\f\y\f\a\k\0\c\q\2\v\b\8\9\k\4\d\8\f\d\4\h\d\x\z\n\i\t\s\7\x\f\i\v\9\m\t\7\7\3\m\k\q\i\8\3\w\t\p\q\9\y\d\e\p\z\w\h\9\e\t\w\z\g\y\d\j\l\1\9\q\7\b\3\r\5\l\0\2\7\s\k\a\e\1\9\1\d\d\u\w\9\p\h\q\t\u\i\2\3\5\7\m\4\l\j\e\p\s\3\6\k\g\9\e\8\e\i\1\r\p\7\5\d\9\5\v\5\j\l\g\7\u\d\g\n\2\d\n\4\m\o\y\p\i\9\i\m\s\g\9\w\7\i\z\7\a\1\l\v\u\w\l\k\6\8\f\j\c\5\o\6\w\5\1\7\u\k\d\9\p\4\l\z\v\9\y\q\o\j\p\s\b\9\w\o\f\i\2\v\3\3\k\1\b\w\u\d\3
\t\0\v\a\w\d\e\w\b\j\r\h\7\5\p\p\s\i\1\k\d\o\d\1\n\i\l\i\f\d\7\m\f\m\i\6\f\9\g\j\j\3\v\w\k\c\p\4\0\o\v\t\s\v\k\w\w\j\6\6\q\x\m\9\m\9\1\4\x\o\w\h\r\r\k\p\v\j\l\k\e\a\5\o\4\h\t\9\f\p\n\w\u\g\4\g\8\m\u\h\h\p\3\3\i\w\9\2\i\d\i\3\e\b\h\9\k\1\y\a\x\e\n\3\l\8\a\9\l\i\0\p\s\j\b\y\m\x\5\l\g\h\8\8\j\8\j\y\w\4\o\c\k\d\6\m\r\u\6\t\e\p\3\4\2\8\f\0\v\h\x\q\4\a\r\a\g\l\p\k\q\i\4\8\a\p\6\f\w\t\2\j\b\q\1\o\k\f\7\c\z\j\g\i\r\7\e\k\d\s\t\i\m\q\o\3\2\x\v\p\t\y\g\2\k\z\t\1\o\d\4\v\g\6\d\9\q\y\j\1\6\1\k\w\x\c\3\g\8\e\7\7\h\p\9\p\g\r\3\m\x\x\l\y\k\4\1\v\9\w\g\6\y\f\c\j\t\l\w\5\k\o\g\x\j\v\k\8\r\8\e\m\t\7\3\8\9\g\2\1\9\p\3\1\s\j\u\7\3\f\3\d\y\f\m\7\h\c\i\q\w\h\l\l\6\5\1\5\p\a\b\8\h\m\0\l\0\1\r\f\s\s\e\v\8\b\3\n\i\b\1\f\s\0\y\2\e\9\i\6\e\c\s\e\g\b\1\u\1\f\t\m\1\b\d\z\d\n\b\a\r\y\9\y\7\y\p\k\9\6\z\0\w\b\d\d\y\7\9\g\x\d\6\r\5\3\s\s\c\f\r\q\6\a\g\s\z\2\q\x\b\7\2\l\d\n\j\j\f\0\t\j\0\g\1\z\6\4\h\h\h\f\y\g\g\i\5\4\x\s\m\o\p\x\s\r\f\1\b\m\t\o\4\o\0\p\f\x\l\w\9\x\9\5\n\q\g\p\q\8\u\s\q\m\f\n\x\j\2\o\z\q\d\k\d\a\6\q\v\1\m\5\g\q\x\n\r\r\p\l\7\w\7\n\m\r\v\v\6\h\0\i\a\q\4\z\m\x\p\7\z\t\u\u\t\r\k\u\v\k\v\e\h\j\d\8\o\a\d\y\i\v\8\1\m\8\l\o\u\c\6\6\k\q\4\7\i\h\k\a\c\m\1\g\j\d\h\l\i\f\j\g\i\4\5\5\4\e\o\5\8\n\v\o\j\v\7\j\k\8\v\b\7\v\d\r\c\d\z\r\k\p\d\d\t\1\z\4\f\x\j\o\4\n\y\d\2\a\t\f\7\u\m\y\v\g\y\i\i\j\o\9\5\j\q\7\s\h\i\1\x\h\h\6\6\2\x\k\i\x\f\p\m\8\w\n\7\q\c\t\8\x\y\0\q\3\p\p\n\s\8\4\l\e\w\8\f\q\a\x\u\i\9\v\s\8\p\3\q\m\e\5\g\g\q\1\z\o\l\2\6\k\y\g\i\2\9\s\5\u\3\e\f\k\s\u\4\t\0\i\3\1\o\b\y\i\0\c\y\p\j\1\w\w\i\2\4\g\q\x\f\0\e\k\v\7\n\6\u\0\t\7\o\7\s\n\z\2\i\w\l\h\6\m\p\a\i\4\p\f\0\z\5\o\5\z\t\s\r\c\t\1\u\r\h\6\f\6\2\6\4\m\0\2\y\b\0\p\r\7\q\r\h\q\t\8\e\p\x\7\7\o\6\z\3\p\m\k\p\c\k\5\4\c\j\2\f\x\0\0\c\r\6\p\s\z\7\6\h\4\j\q\j\d\w\l\5\o\s\l\f\3\t\a\f\c\8\k\1\j\g\q\w\1\y\e\g\f\n\l\a\p\z\1\2\s\n\l\w\f\e\m\o\6\s\e\b\1\y\s\v\2\b\m\5\q\e\k\n\o\o\w\1\x\a\f\d\p\l\y\e\k\4\n\w\1\j\p\m\c\t\c\r\1\1\h\6\8\1\a\o\r\u\2\d\4\2\z\1\r\7\f\u\8\8\e\m\g\f\a\3\b\3\o\u\g\6\4\l\u\j\f\m\h\o\v\9\c\m\8\3\5\n\r\4\a\t\p\y\2\b\q\k\2\5\l\5\h\g\8\s\k\f\o\a\4\m\e\p\q\w\u\k\q\s\t\w\a\i\0\v\r\e\x\o\m\5\r\d\a\j\g\l\b\m\t\l\o\q\t\f\f\w\r\u\l\w\q\c\u\u\s\d\p\y\0\3\h\8\d\a\1\r\l\n\6\r\y\u\m\b\q\l\8\j\k\3\l\p\6\0\f\z\j\5\i\d\n\1\x\8\0\n\i\s\0\7\2\j\t\c\s\8\k\x\w\u\6\u\x\i\8\3\g\2\c\d\n\l\q\c\x\h\5\j\y\h\1\c\m\1\z\y\f\e\3\r\q\i\i\v\a\i\g\b\9\k\k\f\4\h\u\c\t\d\t\d\b\3\d\a\w\r\v\n\p\d\5\k\u\7\2\e\3\s\g\d\a\n\1\9\s\8\7\x\t\j\q\h\c\i\9\p\j\7\x\0\v\q\e\o\u\b\g\4\x\h\n\4\g\2\c\j\5\h\f\1\u\v\h\y\i\t\m\m\p\x\v\j\r\7\7\w\r\g\q\c\s\j\i\s\r\5\k\2\7\j\w\i\0\7\e\8\z\e\f\c\5\b\z\i\8\n\f\7\w\e\a\p\s\1\9\9\e\q\4\5\s\4\p\t\9\v\d\n\c\f\q\j\r\8\r\z\b\q\0\n\h\z\8\i\4\q\9\8\q\7\6\i\3\j\6\j\o\p\t\a\k\3\0\u\r\s\r\k\7\b\a\n\5\p\t\4\h\y\7\j\2\0\z\n\k\3\b\6\b\j\t\x\k\t\z\p\y\c\k\z\e\h\5\r\w\9\6\t\s\r\r\f\x\s\y\u\i\k\s\5\y\2\m\p\l\v\n\1\4\b\5\g\o\3\d\1\d\1\4\w\t\6\1\v\l\b\b\w\m\z\w\w\6\6\g\n\i\l\8\l\q\k\c\v\q\f\7\m\9\5\r\6\j\k\p\5\8\x\j\a\0\d\l\e\4\i\e\6\u\7\t\k\c\w\0\7\l\z\9\9\g\r\1\8\o\z\y\x\l\y\x\y\d\d\m\2\7\u\a\3\8\q\l\w\6\0\y\6\6\1\1\z\1\5\y\i\g\p\x\7\w\8\r\7\4\u\5\c\l\o\f\0\c\9\r\a\h\e\h\1\a\j\m\3\n\s\q\i\9\i\a\4\1\r\o\r\g\l\5\3\9\k\v\d\k\z\b\9\l\w\8\u\o\z\6\z\h\w\o\e\s\z\3\l\8\8\a\7\a\o\l\o\1\e\l\9\c\f\k\j\u\q\p\6\b\k\g\y\p\w\r\6\k\d\i\4\l\h\j\6\0\e\r\w\5\s\x\9\v\q\w\d\8\7\4\t\y\v\3\p\g\2\k\n\y\s\f\f\j\2\a\f\v\z\w\v\d\d\3\t\k\r\a\1\s\b\1\s\p\f\z\u\z\h\d\v\d\8\c\v\m\p\6\4\w\v\r\l\k\c\1\5\8\q\x\v\p\6\m\6\s\r\7\e\6\m\v\9\f\z\e\e\s\n\6\h\1\8\n\a\o\t\i\h\j\k\x\k\x\c\i\0\2\3\r\x\5\9\6\b\r\z\j\d\5\k\d\a\u\d\t\g\6\m\a\x\2\z\3\y\e\2\b\4\t\w\q\a\4\3\0\j\1\8\v\h\u\d\5\j\7\b\x\h\e\y\1\2\i\u\w\f\5\z\n\g\7\h\k\3\
d\x\o\n\a\l\5\n\3\4\r\v\t\c\c\e\e\7\z\k\i\z\6\2\s\v\2\k\u\n\s\y\5\8\m\l\x\w\m\y\w\z\p\p\r\4\z\9\5\0\g\u\7\h\k\a\9\f\p\o\8\6\k\l\y\1\u\l\g\e\i\i\y\y\g\v\y\q\l\k\o\a\d\7\c\q\h\j\i\f\p\g\4\c\u\v\0\s\r\3\f\4\0\r\m\k\t\x\s\a\5\v\8\t\j\7\a\m\c\f\e\n\m\q\z\z\p\1\7\2\u\7\m\n\e\q\7\3\j\d\g\v\u\u\m\c\i\o\u\p\1\b\i\x\b\v\f\i\d\o\p\k\9\u\b\i\o\0\s\z\m\e\a\x\z\8\4\7\q\e\o\7\n\0\x\t\r\2\m\7\i\r\0\h\7\h\z\f\u\2\a\5\l\l\o\s\v\m\6\9\q\p\5\y\f\t\u\7\c\r\8\j\z\z\j\h\f\h\0\2\e\m\z\8\y\j\v\0\e\7\3\2\f\6\w\s\j\y\3\a\5\7\2\s\z\x\9\o\f\5\0\s\3\m\s\6\8\y\q\z\q\c\0\0\h\q\q\z\8\c\9\0\u\5\k\7\r\r\p\b\8\b\v\y\x\s\b\n\e\z\5\1\a\m\0\q\3\o\b\n\l\a\b\3\j\c\6\u\4\t\g\6\t\0\u\d\t\5\x\j\s\7\8\i\d\1\y\w\a\t\i\f\8\t\1\q\u\s\2\s\u\r\e\s\w\f\b\k\4\a\u\h\9\a\s\q\c\a\v\s\g\z\l\i\o\a\s\9\w\h\w\1\0\w\m\6\x\h\l\b\2\l\c\n\f\m\9\t\2\f\r\0\p\t\t\l\e\q\t\a\7\a\4\l\e\w\d\r\5\t\5\f\v\l\4\6\m\h\f\i\v\i\m\b\u\6\k\0\r\7\0\4\u\a\u\n\n\c\v\9\s\i\d\t\l\9\4\k\0\q\s\p\f\h\c\y\p\n\8\8\b\n\c\m\7\1\f\q\4\9\f\h\4\y\v\5\z\4\p\g\u\8\e\6\p\a\y\3\b\5\y\6\r\8\u\x\t\6\p\g\q\6\x\2\t\n\z\d\h\n\o\0\s\8\v\8\y\m\e\g\9\h\x\x\6\b\p\f\t\x\v\p\c\r\u\o\s\5\i\a\l\j\3\u\k\6\q\t\3\u\s\u\t\s\s\g\l\2\l\7\e\z\1\e\q\m\x\a\n\g\d\9\z\l\s\a\u\y\w\j\h\n\0\8\2\l\6\6\f\l\k\t\1\a\v\g\i\a\a\l\9\l\z\w\s\i\g\8\o\v\2\x\7\i\b\d\5\7\4\q\e\7\5\u\j\v\i\q\6\h\d\t\6\w\h\9\c\1\y\8\0\x\o\3\d\4\5\a\p\8\8\j\w\u\j\z\i\v\2\8\p\x\5\u\a\m\1\i\0\h\x\c\p\i\8\a\4\w\u\v\o\2\4\4\o\3\e\q\j\w\i\9\n\s\p\c\f\u\b\e\n\v\b\6\f\c\d\f\h\z\e\f\s\3\2\t\1\9\q\n\m\l\a\k\n\2\j\3\o\a\d\v\2\n\a\u\p\c\9\x\o\q\i\e\2\7\4\p\z\f\t\a\j\5\y\f\8\x\o\v\2\q\t\g\v\i\e\0\u\g\w\r\s\6\n\d\i\h\q\n\d\5\2\2\l\3\4\4\e\4\d\2\0\o\t\u\1\i\j\u\f\s\2\t\6\3\v\n\x\j\a\6\2\p\o\l\f\0\y\2\k\u\f\e\5\b\a\n\9\m\p\1\r\g\t\c\l\w\i\h\u\a\b\2\t\f\m\e\c\r\x\w\5\i\l\8\n\3\g\n\f\3\1\u\1\2\r\8\3\8\f\n\o\t\n\y\6\9\h\l\5\z\l\1\f\k\l\u\i\f\l\4\t\s\5\p\f\f\k\v\3\r\t\a\i\8\b\b\g\5\4\l\i\x\t\g\u\c\j\g\n\3\7\u\d\o\j\6\5\f\s\4\g\g\6\d\r\g\p\y\3\e\d\3\t\7\a\j\r\u\t\l\v\k\7\t\8\e\m\f\w\q\t\7\m\q\5\g\0\t\y\x\p\e\a\9\z\s\w\u\o\o\a\s\0\e\y\1\9\b\u\t\a\l\p\z\x\v\n\0\m\0\t\3\o\6\u\e\0\0\s\d\j\m\1\n\k\w\q\m\h\i\w\j\h\g\r\9\e\x\w\2\x\w\m\1\r\5\3\k\w\x\4\i\4\w\k\i\r\g\b\0\f\i\x\w\x\e\y\l\d\5\y\9\b\5\y\5\c\9\6\l\s\6\d\9\i\r\0\b\o\g\a\g\4\d\7\u\t\q\i\v\g\4\t\i\f\j\q\l\x\y\n\s\r\5\v\m\0\k\m\s\c\p\k\v\m\w\n\d\3\a\d\3\e\n\k\s\d\h\8\d\6\7\q\9\3\4\5\8\a\c\6\n\5\t\u\2\b\d\4\3\v\b\o\d\z\s\g\j\c\l\u\m\q\3\z\x\t\5\n\h\j\p\2\0\5\8\7\k\y\4\v\9\1\t\r\3\w\5\a\k\5\1\g\i\d\a\3\7\c\r\v\w\0\f\g\e\r\i\b\r\g\a\5\m\s\v\8\r\r\l\5\z\o\9\0\2\g\9\k\3\b\v\3\e\9\d\4\1\u\v\c\d\o\j\9\e\s\3\w\m\3\n\b\q\6\u\5\j\v\u\8\o\u\e\c\n\2\0\3\1\6\a\t\t\e\4\9\p\m\t\5\l\r\w\w\x\g\t\y\k\d\g\5\m\z\z\v\u\i\u\r\y\u\m\e\u\u\p\v\o\7\7\c\g\c\c\f\i\9\w\2\b\n\h\j\d\7\b\n\u\p\8\y\j\1\p\z\e\7\i\c\6\0 ]] 00:05:57.632 00:05:57.632 real 0m1.284s 00:05:57.632 user 0m0.868s 00:05:57.632 sys 0m0.599s 00:05:57.632 10:23:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:57.632 10:23:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:57.632 10:23:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:05:57.632 10:23:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:05:57.632 10:23:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:57.632 10:23:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:57.632 10:23:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:05:57.632 10:23:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:05:57.632 10:23:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:05:57.632 10:23:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:57.632 10:23:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:05:57.632 10:23:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:57.632 10:23:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:57.632 [2024-11-15 10:23:58.339394] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:05:57.632 [2024-11-15 10:23:58.339502] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60000 ] 00:05:57.632 { 00:05:57.632 "subsystems": [ 00:05:57.632 { 00:05:57.632 "subsystem": "bdev", 00:05:57.632 "config": [ 00:05:57.632 { 00:05:57.632 "params": { 00:05:57.632 "trtype": "pcie", 00:05:57.632 "traddr": "0000:00:10.0", 00:05:57.632 "name": "Nvme0" 00:05:57.632 }, 00:05:57.632 "method": "bdev_nvme_attach_controller" 00:05:57.632 }, 00:05:57.632 { 00:05:57.632 "method": "bdev_wait_for_examine" 00:05:57.632 } 00:05:57.632 ] 00:05:57.632 } 00:05:57.632 ] 00:05:57.632 } 00:05:57.890 [2024-11-15 10:23:58.486214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.890 [2024-11-15 10:23:58.535311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.890 [2024-11-15 10:23:58.593695] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.890  [2024-11-15T10:23:59.003Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:58.150 00:05:58.150 10:23:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:58.151 ************************************ 00:05:58.151 END TEST spdk_dd_basic_rw 00:05:58.151 ************************************ 00:05:58.151 00:05:58.151 real 0m17.277s 00:05:58.151 user 0m12.313s 00:05:58.151 sys 0m6.655s 00:05:58.151 10:23:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:58.151 10:23:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:58.151 10:23:58 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:05:58.151 10:23:58 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:58.151 10:23:58 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:58.151 10:23:58 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:58.151 ************************************ 00:05:58.151 START TEST spdk_dd_posix 00:05:58.151 ************************************ 00:05:58.151 10:23:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:05:58.410 * Looking for test storage... 
00:05:58.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:58.410 10:23:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:58.410 10:23:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lcov --version 00:05:58.410 10:23:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:58.410 10:23:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:58.410 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.410 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.410 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.410 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.410 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.410 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.410 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.410 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.410 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.410 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.410 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.410 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:05:58.410 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:05:58.410 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.410 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.410 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:58.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.411 --rc genhtml_branch_coverage=1 00:05:58.411 --rc genhtml_function_coverage=1 00:05:58.411 --rc genhtml_legend=1 00:05:58.411 --rc geninfo_all_blocks=1 00:05:58.411 --rc geninfo_unexecuted_blocks=1 00:05:58.411 00:05:58.411 ' 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:58.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.411 --rc genhtml_branch_coverage=1 00:05:58.411 --rc genhtml_function_coverage=1 00:05:58.411 --rc genhtml_legend=1 00:05:58.411 --rc geninfo_all_blocks=1 00:05:58.411 --rc geninfo_unexecuted_blocks=1 00:05:58.411 00:05:58.411 ' 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:58.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.411 --rc genhtml_branch_coverage=1 00:05:58.411 --rc genhtml_function_coverage=1 00:05:58.411 --rc genhtml_legend=1 00:05:58.411 --rc geninfo_all_blocks=1 00:05:58.411 --rc geninfo_unexecuted_blocks=1 00:05:58.411 00:05:58.411 ' 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:58.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.411 --rc genhtml_branch_coverage=1 00:05:58.411 --rc genhtml_function_coverage=1 00:05:58.411 --rc genhtml_legend=1 00:05:58.411 --rc geninfo_all_blocks=1 00:05:58.411 --rc geninfo_unexecuted_blocks=1 00:05:58.411 00:05:58.411 ' 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:05:58.411 * First test run, liburing in use 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:58.411 ************************************ 00:05:58.411 START TEST dd_flag_append 00:05:58.411 ************************************ 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1127 -- # append 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=w0k7j109uky6b2i6u2fqevbffv20g3js 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=ap2jjad22da3519vkndmyivqew3d1zni 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s w0k7j109uky6b2i6u2fqevbffv20g3js 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s ap2jjad22da3519vkndmyivqew3d1zni 00:05:58.411 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:05:58.411 [2024-11-15 10:23:59.223535] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:05:58.411 [2024-11-15 10:23:59.223623] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60062 ] 00:05:58.670 [2024-11-15 10:23:59.363269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.670 [2024-11-15 10:23:59.421535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.670 [2024-11-15 10:23:59.478243] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.670  [2024-11-15T10:23:59.783Z] Copying: 32/32 [B] (average 31 kBps) 00:05:58.930 00:05:58.930 ************************************ 00:05:58.930 END TEST dd_flag_append 00:05:58.930 ************************************ 00:05:58.930 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ ap2jjad22da3519vkndmyivqew3d1zniw0k7j109uky6b2i6u2fqevbffv20g3js == \a\p\2\j\j\a\d\2\2\d\a\3\5\1\9\v\k\n\d\m\y\i\v\q\e\w\3\d\1\z\n\i\w\0\k\7\j\1\0\9\u\k\y\6\b\2\i\6\u\2\f\q\e\v\b\f\f\v\2\0\g\3\j\s ]] 00:05:58.930 00:05:58.930 real 0m0.529s 00:05:58.930 user 0m0.273s 00:05:58.930 sys 0m0.272s 00:05:58.930 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:58.930 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:58.930 10:23:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:05:58.930 10:23:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:58.930 10:23:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:58.930 10:23:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:58.930 ************************************ 00:05:58.930 START TEST dd_flag_directory 00:05:58.930 ************************************ 00:05:58.930 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1127 -- # directory 00:05:58.930 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:58.930 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:05:58.930 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:58.930 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:58.930 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.930 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:58.930 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.930 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:58.930 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.930 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:58.930 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:58.930 10:23:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:59.189 [2024-11-15 10:23:59.792562] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:05:59.189 [2024-11-15 10:23:59.792650] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60096 ] 00:05:59.189 [2024-11-15 10:23:59.934594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.189 [2024-11-15 10:23:59.979466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.189 [2024-11-15 10:24:00.036325] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.447 [2024-11-15 10:24:00.073239] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:59.447 [2024-11-15 10:24:00.073295] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:59.447 [2024-11-15 10:24:00.073329] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:59.447 [2024-11-15 10:24:00.187182] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:59.447 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:05:59.447 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:59.447 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:05:59.447 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:05:59.447 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:05:59.447 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:59.447 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:59.447 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:05:59.447 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:59.447 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:59.447 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.447 10:24:00 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:59.447 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.447 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:59.447 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.447 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:59.447 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:59.447 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:59.707 [2024-11-15 10:24:00.308238] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:05:59.707 [2024-11-15 10:24:00.308560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60106 ] 00:05:59.707 [2024-11-15 10:24:00.455396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.707 [2024-11-15 10:24:00.508892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.966 [2024-11-15 10:24:00.566640] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.966 [2024-11-15 10:24:00.604463] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:59.966 [2024-11-15 10:24:00.604808] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:59.966 [2024-11-15 10:24:00.604853] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:59.966 [2024-11-15 10:24:00.719510] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:59.966 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:05:59.966 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:59.966 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:05:59.966 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:05:59.966 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:05:59.966 ************************************ 00:05:59.966 END TEST dd_flag_directory 00:05:59.966 ************************************ 00:05:59.966 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:59.966 00:05:59.966 real 0m1.037s 00:05:59.966 user 0m0.544s 00:05:59.966 sys 0m0.282s 00:05:59.966 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:59.966 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:00.225 10:24:00 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:00.225 10:24:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:00.225 10:24:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:00.225 10:24:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:00.225 ************************************ 00:06:00.225 START TEST dd_flag_nofollow 00:06:00.225 ************************************ 00:06:00.225 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1127 -- # nofollow 00:06:00.225 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:00.225 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:00.225 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:00.226 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:00.226 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:00.226 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:00.226 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:00.226 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:00.226 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.226 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:00.226 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.226 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:00.226 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.226 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:00.226 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:00.226 10:24:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:00.226 [2024-11-15 10:24:00.896753] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:00.226 [2024-11-15 10:24:00.896972] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60134 ] 00:06:00.226 [2024-11-15 10:24:01.038261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.484 [2024-11-15 10:24:01.088998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.484 [2024-11-15 10:24:01.141294] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.484 [2024-11-15 10:24:01.177156] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:00.484 [2024-11-15 10:24:01.177209] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:00.484 [2024-11-15 10:24:01.177242] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:00.484 [2024-11-15 10:24:01.296101] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:00.743 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:00.743 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:00.743 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:00.743 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:00.743 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:00.743 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:00.743 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:00.743 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:00.743 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:00.743 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:00.743 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.743 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:00.743 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.743 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:00.743 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.743 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:00.743 10:24:01 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:00.743 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:00.743 [2024-11-15 10:24:01.414947] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:06:00.743 [2024-11-15 10:24:01.415265] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60144 ] 00:06:00.743 [2024-11-15 10:24:01.557119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.003 [2024-11-15 10:24:01.605909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.003 [2024-11-15 10:24:01.660932] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.003 [2024-11-15 10:24:01.697635] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:01.003 [2024-11-15 10:24:01.697703] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:01.003 [2024-11-15 10:24:01.697738] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:01.003 [2024-11-15 10:24:01.816935] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:01.262 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:01.262 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:01.262 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:01.262 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:01.262 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:01.262 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:01.262 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:01.262 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:01.262 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:01.262 10:24:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:01.262 [2024-11-15 10:24:01.947272] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:01.262 [2024-11-15 10:24:01.947374] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60153 ] 00:06:01.262 [2024-11-15 10:24:02.094561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.520 [2024-11-15 10:24:02.142015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.520 [2024-11-15 10:24:02.197642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.520  [2024-11-15T10:24:02.633Z] Copying: 512/512 [B] (average 500 kBps) 00:06:01.780 00:06:01.780 ************************************ 00:06:01.780 END TEST dd_flag_nofollow 00:06:01.780 ************************************ 00:06:01.780 10:24:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ a4lacudkdrfd155kvzhpsbqwcox3gz6krqof5bdjwunllz7yjo0zpxb7oh9h0685ts078yx6ue8g6p2yc8osxfx0fvkl6nm4fcfqaqrmh3tadz29neqkrnol4zdepnsxkpwdkh951fqajr7qd9zojra5qcfmqn78sc0d9rpgskzhtnodh99n5kvrihp0pipvu2x8kr8guibxqzrim04jzbzxgsmtfua1ggsqsixquy7bqipjltdxhf5ufik87xuipl891qke35o7mbyv6p7wal10wcanb555ww8aii68wnva5dy160jhr6edt4g546cy2kn7m1yk3ceqkvy2sj316hu38w4j2cj978fe4g830pd4drdppz39srv6n2ou7anj2h1i48p883gie9lp8s0skhufgivlbe1gdv1u9m7vtadt5wxjz56ioktjvria6vsj6av3w3a65yyfyx7uazyeyqslnxcnryycr44pdm0pb3wlkffy1fyub4ziqh3pkuwh == \a\4\l\a\c\u\d\k\d\r\f\d\1\5\5\k\v\z\h\p\s\b\q\w\c\o\x\3\g\z\6\k\r\q\o\f\5\b\d\j\w\u\n\l\l\z\7\y\j\o\0\z\p\x\b\7\o\h\9\h\0\6\8\5\t\s\0\7\8\y\x\6\u\e\8\g\6\p\2\y\c\8\o\s\x\f\x\0\f\v\k\l\6\n\m\4\f\c\f\q\a\q\r\m\h\3\t\a\d\z\2\9\n\e\q\k\r\n\o\l\4\z\d\e\p\n\s\x\k\p\w\d\k\h\9\5\1\f\q\a\j\r\7\q\d\9\z\o\j\r\a\5\q\c\f\m\q\n\7\8\s\c\0\d\9\r\p\g\s\k\z\h\t\n\o\d\h\9\9\n\5\k\v\r\i\h\p\0\p\i\p\v\u\2\x\8\k\r\8\g\u\i\b\x\q\z\r\i\m\0\4\j\z\b\z\x\g\s\m\t\f\u\a\1\g\g\s\q\s\i\x\q\u\y\7\b\q\i\p\j\l\t\d\x\h\f\5\u\f\i\k\8\7\x\u\i\p\l\8\9\1\q\k\e\3\5\o\7\m\b\y\v\6\p\7\w\a\l\1\0\w\c\a\n\b\5\5\5\w\w\8\a\i\i\6\8\w\n\v\a\5\d\y\1\6\0\j\h\r\6\e\d\t\4\g\5\4\6\c\y\2\k\n\7\m\1\y\k\3\c\e\q\k\v\y\2\s\j\3\1\6\h\u\3\8\w\4\j\2\c\j\9\7\8\f\e\4\g\8\3\0\p\d\4\d\r\d\p\p\z\3\9\s\r\v\6\n\2\o\u\7\a\n\j\2\h\1\i\4\8\p\8\8\3\g\i\e\9\l\p\8\s\0\s\k\h\u\f\g\i\v\l\b\e\1\g\d\v\1\u\9\m\7\v\t\a\d\t\5\w\x\j\z\5\6\i\o\k\t\j\v\r\i\a\6\v\s\j\6\a\v\3\w\3\a\6\5\y\y\f\y\x\7\u\a\z\y\e\y\q\s\l\n\x\c\n\r\y\y\c\r\4\4\p\d\m\0\p\b\3\w\l\k\f\f\y\1\f\y\u\b\4\z\i\q\h\3\p\k\u\w\h ]] 00:06:01.780 00:06:01.780 real 0m1.586s 00:06:01.780 user 0m0.848s 00:06:01.780 sys 0m0.569s 00:06:01.780 10:24:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:01.780 10:24:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:01.780 10:24:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:01.780 10:24:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:01.780 10:24:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:01.780 10:24:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:01.780 ************************************ 00:06:01.780 START TEST dd_flag_noatime 00:06:01.780 ************************************ 00:06:01.780 10:24:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1127 -- # noatime 00:06:01.780 10:24:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:06:01.780 10:24:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:01.780 10:24:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:01.781 10:24:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:01.781 10:24:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:01.781 10:24:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:01.781 10:24:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1731666242 00:06:01.781 10:24:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:01.781 10:24:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1731666242 00:06:01.781 10:24:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:02.718 10:24:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:02.718 [2024-11-15 10:24:03.546231] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:06:02.718 [2024-11-15 10:24:03.546540] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60201 ] 00:06:02.977 [2024-11-15 10:24:03.700127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.977 [2024-11-15 10:24:03.760439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.977 [2024-11-15 10:24:03.819273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.236  [2024-11-15T10:24:04.089Z] Copying: 512/512 [B] (average 500 kBps) 00:06:03.236 00:06:03.236 10:24:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:03.236 10:24:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1731666242 )) 00:06:03.236 10:24:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:03.236 10:24:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1731666242 )) 00:06:03.236 10:24:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:03.494 [2024-11-15 10:24:04.104005] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:03.494 [2024-11-15 10:24:04.104339] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60210 ] 00:06:03.494 [2024-11-15 10:24:04.250858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.494 [2024-11-15 10:24:04.304615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.752 [2024-11-15 10:24:04.360263] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.752  [2024-11-15T10:24:04.605Z] Copying: 512/512 [B] (average 500 kBps) 00:06:03.752 00:06:03.752 10:24:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:03.752 ************************************ 00:06:03.752 END TEST dd_flag_noatime 00:06:03.752 ************************************ 00:06:03.752 10:24:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1731666244 )) 00:06:03.752 00:06:03.752 real 0m2.118s 00:06:03.752 user 0m0.593s 00:06:03.752 sys 0m0.577s 00:06:03.752 10:24:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:03.752 10:24:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:04.010 10:24:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:04.010 10:24:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:04.010 10:24:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:04.010 10:24:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:04.010 ************************************ 00:06:04.010 START TEST dd_flags_misc 00:06:04.010 ************************************ 00:06:04.010 10:24:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1127 -- # io 00:06:04.010 10:24:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:04.010 10:24:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:04.010 10:24:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:04.010 10:24:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:04.010 10:24:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:04.010 10:24:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:04.010 10:24:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:04.010 10:24:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:04.010 10:24:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:04.010 [2024-11-15 10:24:04.695493] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:04.010 [2024-11-15 10:24:04.695582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60244 ] 00:06:04.010 [2024-11-15 10:24:04.833644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.269 [2024-11-15 10:24:04.880166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.269 [2024-11-15 10:24:04.935034] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:04.269  [2024-11-15T10:24:05.381Z] Copying: 512/512 [B] (average 500 kBps) 00:06:04.528 00:06:04.529 10:24:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ hsa30cpiaveer5fs3vcjv6y5w4m2p2kuegjnn188y0424td1olbv65nqp4pvmbw5850xhf7gd014d3ognhy481ic50ld84qx7rpigqc7e402kckwgakqh8mrkv4zsy5vcun9v07chn3fs5if2px61q55wjxsmdvb44xsvcnxquyy8s5bavl0plieq35k4rzvrjobx9tbu13ul2yfyp6hpnr7uy1rta4g6ctgu20j4i6qtzit5sffd081l4cox8ckqcrpfh8leb8qyihcgnmmen4248jp9p7cbq10eq1iq4lhryyrn2dsa8mdpan54802zsfp5ob5v0ab2v0p6wwd2ze2gbpit3nikwrtwqvatovnmm1175is6eagtkcsmm3p9gvgo7wh6pbjdk9enqezwk8z4onxnghfdl811jcl6jqbuqrpppmpomvmzigbxbz2wyj73429vvn0uswq2odwupvdmw4dwnja9tezgof6ktnu5h5cf6l0tc6lr8h657mf == \h\s\a\3\0\c\p\i\a\v\e\e\r\5\f\s\3\v\c\j\v\6\y\5\w\4\m\2\p\2\k\u\e\g\j\n\n\1\8\8\y\0\4\2\4\t\d\1\o\l\b\v\6\5\n\q\p\4\p\v\m\b\w\5\8\5\0\x\h\f\7\g\d\0\1\4\d\3\o\g\n\h\y\4\8\1\i\c\5\0\l\d\8\4\q\x\7\r\p\i\g\q\c\7\e\4\0\2\k\c\k\w\g\a\k\q\h\8\m\r\k\v\4\z\s\y\5\v\c\u\n\9\v\0\7\c\h\n\3\f\s\5\i\f\2\p\x\6\1\q\5\5\w\j\x\s\m\d\v\b\4\4\x\s\v\c\n\x\q\u\y\y\8\s\5\b\a\v\l\0\p\l\i\e\q\3\5\k\4\r\z\v\r\j\o\b\x\9\t\b\u\1\3\u\l\2\y\f\y\p\6\h\p\n\r\7\u\y\1\r\t\a\4\g\6\c\t\g\u\2\0\j\4\i\6\q\t\z\i\t\5\s\f\f\d\0\8\1\l\4\c\o\x\8\c\k\q\c\r\p\f\h\8\l\e\b\8\q\y\i\h\c\g\n\m\m\e\n\4\2\4\8\j\p\9\p\7\c\b\q\1\0\e\q\1\i\q\4\l\h\r\y\y\r\n\2\d\s\a\8\m\d\p\a\n\5\4\8\0\2\z\s\f\p\5\o\b\5\v\0\a\b\2\v\0\p\6\w\w\d\2\z\e\2\g\b\p\i\t\3\n\i\k\w\r\t\w\q\v\a\t\o\v\n\m\m\1\1\7\5\i\s\6\e\a\g\t\k\c\s\m\m\3\p\9\g\v\g\o\7\w\h\6\p\b\j\d\k\9\e\n\q\e\z\w\k\8\z\4\o\n\x\n\g\h\f\d\l\8\1\1\j\c\l\6\j\q\b\u\q\r\p\p\p\m\p\o\m\v\m\z\i\g\b\x\b\z\2\w\y\j\7\3\4\2\9\v\v\n\0\u\s\w\q\2\o\d\w\u\p\v\d\m\w\4\d\w\n\j\a\9\t\e\z\g\o\f\6\k\t\n\u\5\h\5\c\f\6\l\0\t\c\6\l\r\8\h\6\5\7\m\f ]] 00:06:04.529 10:24:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:04.529 10:24:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:04.529 [2024-11-15 10:24:05.219709] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:04.529 [2024-11-15 10:24:05.220131] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60248 ] 00:06:04.529 [2024-11-15 10:24:05.367749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.803 [2024-11-15 10:24:05.425647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.803 [2024-11-15 10:24:05.482088] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:04.803  [2024-11-15T10:24:05.917Z] Copying: 512/512 [B] (average 500 kBps) 00:06:05.064 00:06:05.064 10:24:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ hsa30cpiaveer5fs3vcjv6y5w4m2p2kuegjnn188y0424td1olbv65nqp4pvmbw5850xhf7gd014d3ognhy481ic50ld84qx7rpigqc7e402kckwgakqh8mrkv4zsy5vcun9v07chn3fs5if2px61q55wjxsmdvb44xsvcnxquyy8s5bavl0plieq35k4rzvrjobx9tbu13ul2yfyp6hpnr7uy1rta4g6ctgu20j4i6qtzit5sffd081l4cox8ckqcrpfh8leb8qyihcgnmmen4248jp9p7cbq10eq1iq4lhryyrn2dsa8mdpan54802zsfp5ob5v0ab2v0p6wwd2ze2gbpit3nikwrtwqvatovnmm1175is6eagtkcsmm3p9gvgo7wh6pbjdk9enqezwk8z4onxnghfdl811jcl6jqbuqrpppmpomvmzigbxbz2wyj73429vvn0uswq2odwupvdmw4dwnja9tezgof6ktnu5h5cf6l0tc6lr8h657mf == \h\s\a\3\0\c\p\i\a\v\e\e\r\5\f\s\3\v\c\j\v\6\y\5\w\4\m\2\p\2\k\u\e\g\j\n\n\1\8\8\y\0\4\2\4\t\d\1\o\l\b\v\6\5\n\q\p\4\p\v\m\b\w\5\8\5\0\x\h\f\7\g\d\0\1\4\d\3\o\g\n\h\y\4\8\1\i\c\5\0\l\d\8\4\q\x\7\r\p\i\g\q\c\7\e\4\0\2\k\c\k\w\g\a\k\q\h\8\m\r\k\v\4\z\s\y\5\v\c\u\n\9\v\0\7\c\h\n\3\f\s\5\i\f\2\p\x\6\1\q\5\5\w\j\x\s\m\d\v\b\4\4\x\s\v\c\n\x\q\u\y\y\8\s\5\b\a\v\l\0\p\l\i\e\q\3\5\k\4\r\z\v\r\j\o\b\x\9\t\b\u\1\3\u\l\2\y\f\y\p\6\h\p\n\r\7\u\y\1\r\t\a\4\g\6\c\t\g\u\2\0\j\4\i\6\q\t\z\i\t\5\s\f\f\d\0\8\1\l\4\c\o\x\8\c\k\q\c\r\p\f\h\8\l\e\b\8\q\y\i\h\c\g\n\m\m\e\n\4\2\4\8\j\p\9\p\7\c\b\q\1\0\e\q\1\i\q\4\l\h\r\y\y\r\n\2\d\s\a\8\m\d\p\a\n\5\4\8\0\2\z\s\f\p\5\o\b\5\v\0\a\b\2\v\0\p\6\w\w\d\2\z\e\2\g\b\p\i\t\3\n\i\k\w\r\t\w\q\v\a\t\o\v\n\m\m\1\1\7\5\i\s\6\e\a\g\t\k\c\s\m\m\3\p\9\g\v\g\o\7\w\h\6\p\b\j\d\k\9\e\n\q\e\z\w\k\8\z\4\o\n\x\n\g\h\f\d\l\8\1\1\j\c\l\6\j\q\b\u\q\r\p\p\p\m\p\o\m\v\m\z\i\g\b\x\b\z\2\w\y\j\7\3\4\2\9\v\v\n\0\u\s\w\q\2\o\d\w\u\p\v\d\m\w\4\d\w\n\j\a\9\t\e\z\g\o\f\6\k\t\n\u\5\h\5\c\f\6\l\0\t\c\6\l\r\8\h\6\5\7\m\f ]] 00:06:05.064 10:24:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:05.064 10:24:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:05.064 [2024-11-15 10:24:05.781541] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:05.064 [2024-11-15 10:24:05.781691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60263 ] 00:06:05.323 [2024-11-15 10:24:05.939425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.323 [2024-11-15 10:24:06.003295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.323 [2024-11-15 10:24:06.057555] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.323  [2024-11-15T10:24:06.434Z] Copying: 512/512 [B] (average 500 kBps) 00:06:05.581 00:06:05.581 10:24:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ hsa30cpiaveer5fs3vcjv6y5w4m2p2kuegjnn188y0424td1olbv65nqp4pvmbw5850xhf7gd014d3ognhy481ic50ld84qx7rpigqc7e402kckwgakqh8mrkv4zsy5vcun9v07chn3fs5if2px61q55wjxsmdvb44xsvcnxquyy8s5bavl0plieq35k4rzvrjobx9tbu13ul2yfyp6hpnr7uy1rta4g6ctgu20j4i6qtzit5sffd081l4cox8ckqcrpfh8leb8qyihcgnmmen4248jp9p7cbq10eq1iq4lhryyrn2dsa8mdpan54802zsfp5ob5v0ab2v0p6wwd2ze2gbpit3nikwrtwqvatovnmm1175is6eagtkcsmm3p9gvgo7wh6pbjdk9enqezwk8z4onxnghfdl811jcl6jqbuqrpppmpomvmzigbxbz2wyj73429vvn0uswq2odwupvdmw4dwnja9tezgof6ktnu5h5cf6l0tc6lr8h657mf == \h\s\a\3\0\c\p\i\a\v\e\e\r\5\f\s\3\v\c\j\v\6\y\5\w\4\m\2\p\2\k\u\e\g\j\n\n\1\8\8\y\0\4\2\4\t\d\1\o\l\b\v\6\5\n\q\p\4\p\v\m\b\w\5\8\5\0\x\h\f\7\g\d\0\1\4\d\3\o\g\n\h\y\4\8\1\i\c\5\0\l\d\8\4\q\x\7\r\p\i\g\q\c\7\e\4\0\2\k\c\k\w\g\a\k\q\h\8\m\r\k\v\4\z\s\y\5\v\c\u\n\9\v\0\7\c\h\n\3\f\s\5\i\f\2\p\x\6\1\q\5\5\w\j\x\s\m\d\v\b\4\4\x\s\v\c\n\x\q\u\y\y\8\s\5\b\a\v\l\0\p\l\i\e\q\3\5\k\4\r\z\v\r\j\o\b\x\9\t\b\u\1\3\u\l\2\y\f\y\p\6\h\p\n\r\7\u\y\1\r\t\a\4\g\6\c\t\g\u\2\0\j\4\i\6\q\t\z\i\t\5\s\f\f\d\0\8\1\l\4\c\o\x\8\c\k\q\c\r\p\f\h\8\l\e\b\8\q\y\i\h\c\g\n\m\m\e\n\4\2\4\8\j\p\9\p\7\c\b\q\1\0\e\q\1\i\q\4\l\h\r\y\y\r\n\2\d\s\a\8\m\d\p\a\n\5\4\8\0\2\z\s\f\p\5\o\b\5\v\0\a\b\2\v\0\p\6\w\w\d\2\z\e\2\g\b\p\i\t\3\n\i\k\w\r\t\w\q\v\a\t\o\v\n\m\m\1\1\7\5\i\s\6\e\a\g\t\k\c\s\m\m\3\p\9\g\v\g\o\7\w\h\6\p\b\j\d\k\9\e\n\q\e\z\w\k\8\z\4\o\n\x\n\g\h\f\d\l\8\1\1\j\c\l\6\j\q\b\u\q\r\p\p\p\m\p\o\m\v\m\z\i\g\b\x\b\z\2\w\y\j\7\3\4\2\9\v\v\n\0\u\s\w\q\2\o\d\w\u\p\v\d\m\w\4\d\w\n\j\a\9\t\e\z\g\o\f\6\k\t\n\u\5\h\5\c\f\6\l\0\t\c\6\l\r\8\h\6\5\7\m\f ]] 00:06:05.581 10:24:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:05.581 10:24:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:05.581 [2024-11-15 10:24:06.329426] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:05.581 [2024-11-15 10:24:06.329527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60267 ] 00:06:05.840 [2024-11-15 10:24:06.477378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.840 [2024-11-15 10:24:06.528717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.840 [2024-11-15 10:24:06.583092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.840  [2024-11-15T10:24:06.953Z] Copying: 512/512 [B] (average 500 kBps) 00:06:06.100 00:06:06.100 10:24:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ hsa30cpiaveer5fs3vcjv6y5w4m2p2kuegjnn188y0424td1olbv65nqp4pvmbw5850xhf7gd014d3ognhy481ic50ld84qx7rpigqc7e402kckwgakqh8mrkv4zsy5vcun9v07chn3fs5if2px61q55wjxsmdvb44xsvcnxquyy8s5bavl0plieq35k4rzvrjobx9tbu13ul2yfyp6hpnr7uy1rta4g6ctgu20j4i6qtzit5sffd081l4cox8ckqcrpfh8leb8qyihcgnmmen4248jp9p7cbq10eq1iq4lhryyrn2dsa8mdpan54802zsfp5ob5v0ab2v0p6wwd2ze2gbpit3nikwrtwqvatovnmm1175is6eagtkcsmm3p9gvgo7wh6pbjdk9enqezwk8z4onxnghfdl811jcl6jqbuqrpppmpomvmzigbxbz2wyj73429vvn0uswq2odwupvdmw4dwnja9tezgof6ktnu5h5cf6l0tc6lr8h657mf == \h\s\a\3\0\c\p\i\a\v\e\e\r\5\f\s\3\v\c\j\v\6\y\5\w\4\m\2\p\2\k\u\e\g\j\n\n\1\8\8\y\0\4\2\4\t\d\1\o\l\b\v\6\5\n\q\p\4\p\v\m\b\w\5\8\5\0\x\h\f\7\g\d\0\1\4\d\3\o\g\n\h\y\4\8\1\i\c\5\0\l\d\8\4\q\x\7\r\p\i\g\q\c\7\e\4\0\2\k\c\k\w\g\a\k\q\h\8\m\r\k\v\4\z\s\y\5\v\c\u\n\9\v\0\7\c\h\n\3\f\s\5\i\f\2\p\x\6\1\q\5\5\w\j\x\s\m\d\v\b\4\4\x\s\v\c\n\x\q\u\y\y\8\s\5\b\a\v\l\0\p\l\i\e\q\3\5\k\4\r\z\v\r\j\o\b\x\9\t\b\u\1\3\u\l\2\y\f\y\p\6\h\p\n\r\7\u\y\1\r\t\a\4\g\6\c\t\g\u\2\0\j\4\i\6\q\t\z\i\t\5\s\f\f\d\0\8\1\l\4\c\o\x\8\c\k\q\c\r\p\f\h\8\l\e\b\8\q\y\i\h\c\g\n\m\m\e\n\4\2\4\8\j\p\9\p\7\c\b\q\1\0\e\q\1\i\q\4\l\h\r\y\y\r\n\2\d\s\a\8\m\d\p\a\n\5\4\8\0\2\z\s\f\p\5\o\b\5\v\0\a\b\2\v\0\p\6\w\w\d\2\z\e\2\g\b\p\i\t\3\n\i\k\w\r\t\w\q\v\a\t\o\v\n\m\m\1\1\7\5\i\s\6\e\a\g\t\k\c\s\m\m\3\p\9\g\v\g\o\7\w\h\6\p\b\j\d\k\9\e\n\q\e\z\w\k\8\z\4\o\n\x\n\g\h\f\d\l\8\1\1\j\c\l\6\j\q\b\u\q\r\p\p\p\m\p\o\m\v\m\z\i\g\b\x\b\z\2\w\y\j\7\3\4\2\9\v\v\n\0\u\s\w\q\2\o\d\w\u\p\v\d\m\w\4\d\w\n\j\a\9\t\e\z\g\o\f\6\k\t\n\u\5\h\5\c\f\6\l\0\t\c\6\l\r\8\h\6\5\7\m\f ]] 00:06:06.100 10:24:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:06.100 10:24:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:06.100 10:24:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:06.100 10:24:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:06.100 10:24:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:06.100 10:24:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:06.100 [2024-11-15 10:24:06.861763] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:06.100 [2024-11-15 10:24:06.861866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60282 ] 00:06:06.359 [2024-11-15 10:24:06.999277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.359 [2024-11-15 10:24:07.066701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.359 [2024-11-15 10:24:07.120428] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.359  [2024-11-15T10:24:07.471Z] Copying: 512/512 [B] (average 500 kBps) 00:06:06.618 00:06:06.618 10:24:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ eg3a3ztn1jckc7q45fa9s7apddl8n9de9306blxfftfpsm369keqgjuccxavrlw4xfrkbqbsebrzdnovkonguxh1hl5b7kzbgymve7h0bwrlgnddxzitjjo1yga47ag91x69qa70bygi0vqtx9581fp4okmjvy0zw7yrejbvz5ffteqk53tg2jb932doie2fl1ht2dn6v5lmnm9nrgsad74skwzsxb876hgwx800cpbyeqqrfsa2jubpteqvv8t8fv37lg9i4xq0sk4k7rhpaa5nlvyqu9g1q21a2enulbtvuhlwyjbf3zumcbmidmkx8c5bfp82ganpqh6pkh7mjaj6gp6pyjz75fzoc29qm95jx58gv5ma3kgxh6kutruq20t12iejxex9nne7mmnpnfq8mzpzmtcb1lvgpplwap7vcev0cfi9fycsavpbg7i1p33ti47p2wuck1fl538iuasyaspnsnieh5cbbzbs5cnuv57l1e7z4azqe0plesiv == \e\g\3\a\3\z\t\n\1\j\c\k\c\7\q\4\5\f\a\9\s\7\a\p\d\d\l\8\n\9\d\e\9\3\0\6\b\l\x\f\f\t\f\p\s\m\3\6\9\k\e\q\g\j\u\c\c\x\a\v\r\l\w\4\x\f\r\k\b\q\b\s\e\b\r\z\d\n\o\v\k\o\n\g\u\x\h\1\h\l\5\b\7\k\z\b\g\y\m\v\e\7\h\0\b\w\r\l\g\n\d\d\x\z\i\t\j\j\o\1\y\g\a\4\7\a\g\9\1\x\6\9\q\a\7\0\b\y\g\i\0\v\q\t\x\9\5\8\1\f\p\4\o\k\m\j\v\y\0\z\w\7\y\r\e\j\b\v\z\5\f\f\t\e\q\k\5\3\t\g\2\j\b\9\3\2\d\o\i\e\2\f\l\1\h\t\2\d\n\6\v\5\l\m\n\m\9\n\r\g\s\a\d\7\4\s\k\w\z\s\x\b\8\7\6\h\g\w\x\8\0\0\c\p\b\y\e\q\q\r\f\s\a\2\j\u\b\p\t\e\q\v\v\8\t\8\f\v\3\7\l\g\9\i\4\x\q\0\s\k\4\k\7\r\h\p\a\a\5\n\l\v\y\q\u\9\g\1\q\2\1\a\2\e\n\u\l\b\t\v\u\h\l\w\y\j\b\f\3\z\u\m\c\b\m\i\d\m\k\x\8\c\5\b\f\p\8\2\g\a\n\p\q\h\6\p\k\h\7\m\j\a\j\6\g\p\6\p\y\j\z\7\5\f\z\o\c\2\9\q\m\9\5\j\x\5\8\g\v\5\m\a\3\k\g\x\h\6\k\u\t\r\u\q\2\0\t\1\2\i\e\j\x\e\x\9\n\n\e\7\m\m\n\p\n\f\q\8\m\z\p\z\m\t\c\b\1\l\v\g\p\p\l\w\a\p\7\v\c\e\v\0\c\f\i\9\f\y\c\s\a\v\p\b\g\7\i\1\p\3\3\t\i\4\7\p\2\w\u\c\k\1\f\l\5\3\8\i\u\a\s\y\a\s\p\n\s\n\i\e\h\5\c\b\b\z\b\s\5\c\n\u\v\5\7\l\1\e\7\z\4\a\z\q\e\0\p\l\e\s\i\v ]] 00:06:06.618 10:24:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:06.618 10:24:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:06.618 [2024-11-15 10:24:07.409573] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:06.618 [2024-11-15 10:24:07.409689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60286 ] 00:06:06.879 [2024-11-15 10:24:07.557895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.879 [2024-11-15 10:24:07.619431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.879 [2024-11-15 10:24:07.673632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.879  [2024-11-15T10:24:07.991Z] Copying: 512/512 [B] (average 500 kBps) 00:06:07.138 00:06:07.138 10:24:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ eg3a3ztn1jckc7q45fa9s7apddl8n9de9306blxfftfpsm369keqgjuccxavrlw4xfrkbqbsebrzdnovkonguxh1hl5b7kzbgymve7h0bwrlgnddxzitjjo1yga47ag91x69qa70bygi0vqtx9581fp4okmjvy0zw7yrejbvz5ffteqk53tg2jb932doie2fl1ht2dn6v5lmnm9nrgsad74skwzsxb876hgwx800cpbyeqqrfsa2jubpteqvv8t8fv37lg9i4xq0sk4k7rhpaa5nlvyqu9g1q21a2enulbtvuhlwyjbf3zumcbmidmkx8c5bfp82ganpqh6pkh7mjaj6gp6pyjz75fzoc29qm95jx58gv5ma3kgxh6kutruq20t12iejxex9nne7mmnpnfq8mzpzmtcb1lvgpplwap7vcev0cfi9fycsavpbg7i1p33ti47p2wuck1fl538iuasyaspnsnieh5cbbzbs5cnuv57l1e7z4azqe0plesiv == \e\g\3\a\3\z\t\n\1\j\c\k\c\7\q\4\5\f\a\9\s\7\a\p\d\d\l\8\n\9\d\e\9\3\0\6\b\l\x\f\f\t\f\p\s\m\3\6\9\k\e\q\g\j\u\c\c\x\a\v\r\l\w\4\x\f\r\k\b\q\b\s\e\b\r\z\d\n\o\v\k\o\n\g\u\x\h\1\h\l\5\b\7\k\z\b\g\y\m\v\e\7\h\0\b\w\r\l\g\n\d\d\x\z\i\t\j\j\o\1\y\g\a\4\7\a\g\9\1\x\6\9\q\a\7\0\b\y\g\i\0\v\q\t\x\9\5\8\1\f\p\4\o\k\m\j\v\y\0\z\w\7\y\r\e\j\b\v\z\5\f\f\t\e\q\k\5\3\t\g\2\j\b\9\3\2\d\o\i\e\2\f\l\1\h\t\2\d\n\6\v\5\l\m\n\m\9\n\r\g\s\a\d\7\4\s\k\w\z\s\x\b\8\7\6\h\g\w\x\8\0\0\c\p\b\y\e\q\q\r\f\s\a\2\j\u\b\p\t\e\q\v\v\8\t\8\f\v\3\7\l\g\9\i\4\x\q\0\s\k\4\k\7\r\h\p\a\a\5\n\l\v\y\q\u\9\g\1\q\2\1\a\2\e\n\u\l\b\t\v\u\h\l\w\y\j\b\f\3\z\u\m\c\b\m\i\d\m\k\x\8\c\5\b\f\p\8\2\g\a\n\p\q\h\6\p\k\h\7\m\j\a\j\6\g\p\6\p\y\j\z\7\5\f\z\o\c\2\9\q\m\9\5\j\x\5\8\g\v\5\m\a\3\k\g\x\h\6\k\u\t\r\u\q\2\0\t\1\2\i\e\j\x\e\x\9\n\n\e\7\m\m\n\p\n\f\q\8\m\z\p\z\m\t\c\b\1\l\v\g\p\p\l\w\a\p\7\v\c\e\v\0\c\f\i\9\f\y\c\s\a\v\p\b\g\7\i\1\p\3\3\t\i\4\7\p\2\w\u\c\k\1\f\l\5\3\8\i\u\a\s\y\a\s\p\n\s\n\i\e\h\5\c\b\b\z\b\s\5\c\n\u\v\5\7\l\1\e\7\z\4\a\z\q\e\0\p\l\e\s\i\v ]] 00:06:07.138 10:24:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:07.138 10:24:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:07.138 [2024-11-15 10:24:07.960690] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:07.138 [2024-11-15 10:24:07.960828] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60301 ] 00:06:07.397 [2024-11-15 10:24:08.107264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.397 [2024-11-15 10:24:08.158350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.397 [2024-11-15 10:24:08.210347] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.397  [2024-11-15T10:24:08.509Z] Copying: 512/512 [B] (average 500 kBps) 00:06:07.656 00:06:07.656 10:24:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ eg3a3ztn1jckc7q45fa9s7apddl8n9de9306blxfftfpsm369keqgjuccxavrlw4xfrkbqbsebrzdnovkonguxh1hl5b7kzbgymve7h0bwrlgnddxzitjjo1yga47ag91x69qa70bygi0vqtx9581fp4okmjvy0zw7yrejbvz5ffteqk53tg2jb932doie2fl1ht2dn6v5lmnm9nrgsad74skwzsxb876hgwx800cpbyeqqrfsa2jubpteqvv8t8fv37lg9i4xq0sk4k7rhpaa5nlvyqu9g1q21a2enulbtvuhlwyjbf3zumcbmidmkx8c5bfp82ganpqh6pkh7mjaj6gp6pyjz75fzoc29qm95jx58gv5ma3kgxh6kutruq20t12iejxex9nne7mmnpnfq8mzpzmtcb1lvgpplwap7vcev0cfi9fycsavpbg7i1p33ti47p2wuck1fl538iuasyaspnsnieh5cbbzbs5cnuv57l1e7z4azqe0plesiv == \e\g\3\a\3\z\t\n\1\j\c\k\c\7\q\4\5\f\a\9\s\7\a\p\d\d\l\8\n\9\d\e\9\3\0\6\b\l\x\f\f\t\f\p\s\m\3\6\9\k\e\q\g\j\u\c\c\x\a\v\r\l\w\4\x\f\r\k\b\q\b\s\e\b\r\z\d\n\o\v\k\o\n\g\u\x\h\1\h\l\5\b\7\k\z\b\g\y\m\v\e\7\h\0\b\w\r\l\g\n\d\d\x\z\i\t\j\j\o\1\y\g\a\4\7\a\g\9\1\x\6\9\q\a\7\0\b\y\g\i\0\v\q\t\x\9\5\8\1\f\p\4\o\k\m\j\v\y\0\z\w\7\y\r\e\j\b\v\z\5\f\f\t\e\q\k\5\3\t\g\2\j\b\9\3\2\d\o\i\e\2\f\l\1\h\t\2\d\n\6\v\5\l\m\n\m\9\n\r\g\s\a\d\7\4\s\k\w\z\s\x\b\8\7\6\h\g\w\x\8\0\0\c\p\b\y\e\q\q\r\f\s\a\2\j\u\b\p\t\e\q\v\v\8\t\8\f\v\3\7\l\g\9\i\4\x\q\0\s\k\4\k\7\r\h\p\a\a\5\n\l\v\y\q\u\9\g\1\q\2\1\a\2\e\n\u\l\b\t\v\u\h\l\w\y\j\b\f\3\z\u\m\c\b\m\i\d\m\k\x\8\c\5\b\f\p\8\2\g\a\n\p\q\h\6\p\k\h\7\m\j\a\j\6\g\p\6\p\y\j\z\7\5\f\z\o\c\2\9\q\m\9\5\j\x\5\8\g\v\5\m\a\3\k\g\x\h\6\k\u\t\r\u\q\2\0\t\1\2\i\e\j\x\e\x\9\n\n\e\7\m\m\n\p\n\f\q\8\m\z\p\z\m\t\c\b\1\l\v\g\p\p\l\w\a\p\7\v\c\e\v\0\c\f\i\9\f\y\c\s\a\v\p\b\g\7\i\1\p\3\3\t\i\4\7\p\2\w\u\c\k\1\f\l\5\3\8\i\u\a\s\y\a\s\p\n\s\n\i\e\h\5\c\b\b\z\b\s\5\c\n\u\v\5\7\l\1\e\7\z\4\a\z\q\e\0\p\l\e\s\i\v ]] 00:06:07.656 10:24:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:07.656 10:24:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:07.656 [2024-11-15 10:24:08.487573] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:07.657 [2024-11-15 10:24:08.487709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60311 ] 00:06:07.915 [2024-11-15 10:24:08.633914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.915 [2024-11-15 10:24:08.697388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.915 [2024-11-15 10:24:08.753291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.175  [2024-11-15T10:24:09.028Z] Copying: 512/512 [B] (average 250 kBps) 00:06:08.175 00:06:08.175 10:24:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ eg3a3ztn1jckc7q45fa9s7apddl8n9de9306blxfftfpsm369keqgjuccxavrlw4xfrkbqbsebrzdnovkonguxh1hl5b7kzbgymve7h0bwrlgnddxzitjjo1yga47ag91x69qa70bygi0vqtx9581fp4okmjvy0zw7yrejbvz5ffteqk53tg2jb932doie2fl1ht2dn6v5lmnm9nrgsad74skwzsxb876hgwx800cpbyeqqrfsa2jubpteqvv8t8fv37lg9i4xq0sk4k7rhpaa5nlvyqu9g1q21a2enulbtvuhlwyjbf3zumcbmidmkx8c5bfp82ganpqh6pkh7mjaj6gp6pyjz75fzoc29qm95jx58gv5ma3kgxh6kutruq20t12iejxex9nne7mmnpnfq8mzpzmtcb1lvgpplwap7vcev0cfi9fycsavpbg7i1p33ti47p2wuck1fl538iuasyaspnsnieh5cbbzbs5cnuv57l1e7z4azqe0plesiv == \e\g\3\a\3\z\t\n\1\j\c\k\c\7\q\4\5\f\a\9\s\7\a\p\d\d\l\8\n\9\d\e\9\3\0\6\b\l\x\f\f\t\f\p\s\m\3\6\9\k\e\q\g\j\u\c\c\x\a\v\r\l\w\4\x\f\r\k\b\q\b\s\e\b\r\z\d\n\o\v\k\o\n\g\u\x\h\1\h\l\5\b\7\k\z\b\g\y\m\v\e\7\h\0\b\w\r\l\g\n\d\d\x\z\i\t\j\j\o\1\y\g\a\4\7\a\g\9\1\x\6\9\q\a\7\0\b\y\g\i\0\v\q\t\x\9\5\8\1\f\p\4\o\k\m\j\v\y\0\z\w\7\y\r\e\j\b\v\z\5\f\f\t\e\q\k\5\3\t\g\2\j\b\9\3\2\d\o\i\e\2\f\l\1\h\t\2\d\n\6\v\5\l\m\n\m\9\n\r\g\s\a\d\7\4\s\k\w\z\s\x\b\8\7\6\h\g\w\x\8\0\0\c\p\b\y\e\q\q\r\f\s\a\2\j\u\b\p\t\e\q\v\v\8\t\8\f\v\3\7\l\g\9\i\4\x\q\0\s\k\4\k\7\r\h\p\a\a\5\n\l\v\y\q\u\9\g\1\q\2\1\a\2\e\n\u\l\b\t\v\u\h\l\w\y\j\b\f\3\z\u\m\c\b\m\i\d\m\k\x\8\c\5\b\f\p\8\2\g\a\n\p\q\h\6\p\k\h\7\m\j\a\j\6\g\p\6\p\y\j\z\7\5\f\z\o\c\2\9\q\m\9\5\j\x\5\8\g\v\5\m\a\3\k\g\x\h\6\k\u\t\r\u\q\2\0\t\1\2\i\e\j\x\e\x\9\n\n\e\7\m\m\n\p\n\f\q\8\m\z\p\z\m\t\c\b\1\l\v\g\p\p\l\w\a\p\7\v\c\e\v\0\c\f\i\9\f\y\c\s\a\v\p\b\g\7\i\1\p\3\3\t\i\4\7\p\2\w\u\c\k\1\f\l\5\3\8\i\u\a\s\y\a\s\p\n\s\n\i\e\h\5\c\b\b\z\b\s\5\c\n\u\v\5\7\l\1\e\7\z\4\a\z\q\e\0\p\l\e\s\i\v ]] 00:06:08.175 00:06:08.175 real 0m4.348s 00:06:08.175 user 0m2.360s 00:06:08.175 sys 0m2.206s 00:06:08.175 ************************************ 00:06:08.175 END TEST dd_flags_misc 00:06:08.175 ************************************ 00:06:08.175 10:24:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:08.175 10:24:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:08.434 10:24:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:08.434 10:24:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:08.434 * Second test run, disabling liburing, forcing AIO 00:06:08.434 10:24:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:08.434 10:24:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:08.434 10:24:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:08.434 10:24:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:08.434 10:24:09 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:08.434 ************************************ 00:06:08.434 START TEST dd_flag_append_forced_aio 00:06:08.434 ************************************ 00:06:08.434 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1127 -- # append 00:06:08.434 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:08.434 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:08.434 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:08.434 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:08.434 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:08.434 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=7oqpppcqp1z40gzs4tr2vhgceo9wm3qs 00:06:08.434 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:08.434 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:08.434 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:08.434 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=1b41svhbxm8a0lshdieedub0wxfkd00i 00:06:08.434 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 7oqpppcqp1z40gzs4tr2vhgceo9wm3qs 00:06:08.434 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 1b41svhbxm8a0lshdieedub0wxfkd00i 00:06:08.434 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:08.434 [2024-11-15 10:24:09.106703] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:08.434 [2024-11-15 10:24:09.106836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60339 ] 00:06:08.434 [2024-11-15 10:24:09.255864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.693 [2024-11-15 10:24:09.303638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.693 [2024-11-15 10:24:09.357375] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.693  [2024-11-15T10:24:09.806Z] Copying: 32/32 [B] (average 31 kBps) 00:06:08.953 00:06:08.953 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 1b41svhbxm8a0lshdieedub0wxfkd00i7oqpppcqp1z40gzs4tr2vhgceo9wm3qs == \1\b\4\1\s\v\h\b\x\m\8\a\0\l\s\h\d\i\e\e\d\u\b\0\w\x\f\k\d\0\0\i\7\o\q\p\p\p\c\q\p\1\z\4\0\g\z\s\4\t\r\2\v\h\g\c\e\o\9\w\m\3\q\s ]] 00:06:08.953 00:06:08.953 real 0m0.569s 00:06:08.953 user 0m0.314s 00:06:08.953 sys 0m0.135s 00:06:08.953 ************************************ 00:06:08.953 END TEST dd_flag_append_forced_aio 00:06:08.953 ************************************ 00:06:08.954 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:08.954 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:08.954 10:24:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:08.954 10:24:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:08.954 10:24:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:08.954 10:24:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:08.954 ************************************ 00:06:08.954 START TEST dd_flag_directory_forced_aio 00:06:08.954 ************************************ 00:06:08.954 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1127 -- # directory 00:06:08.954 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:08.954 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:08.954 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:08.954 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.954 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.954 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.954 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.954 10:24:09 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.954 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.954 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.954 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:08.954 10:24:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:08.954 [2024-11-15 10:24:09.726921] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:06:08.954 [2024-11-15 10:24:09.727036] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60366 ] 00:06:09.213 [2024-11-15 10:24:09.873551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.213 [2024-11-15 10:24:09.926162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.213 [2024-11-15 10:24:09.980879] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.213 [2024-11-15 10:24:10.017725] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:09.213 [2024-11-15 10:24:10.017795] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:09.213 [2024-11-15 10:24:10.017813] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.471 [2024-11-15 10:24:10.138210] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:09.471 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:09.471 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.471 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:09.471 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:09.471 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:09.471 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.471 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:09.471 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:09.471 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:09.471 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.471 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.471 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.471 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.471 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.471 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.471 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.471 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:09.471 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:09.471 [2024-11-15 10:24:10.258760] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:06:09.471 [2024-11-15 10:24:10.258861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60375 ] 00:06:09.731 [2024-11-15 10:24:10.406640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.731 [2024-11-15 10:24:10.459555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.731 [2024-11-15 10:24:10.517915] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.731 [2024-11-15 10:24:10.557253] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:09.731 [2024-11-15 10:24:10.557322] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:09.731 [2024-11-15 10:24:10.557357] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.990 [2024-11-15 10:24:10.678609] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:09.990 10:24:10 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.990 00:06:09.990 real 0m1.084s 00:06:09.990 user 0m0.578s 00:06:09.990 sys 0m0.297s 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:09.990 ************************************ 00:06:09.990 END TEST dd_flag_directory_forced_aio 00:06:09.990 ************************************ 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:09.990 ************************************ 00:06:09.990 START TEST dd_flag_nofollow_forced_aio 00:06:09.990 ************************************ 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1127 -- # nofollow 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:09.990 10:24:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:10.249 [2024-11-15 10:24:10.880151] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:06:10.249 [2024-11-15 10:24:10.880296] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60407 ] 00:06:10.249 [2024-11-15 10:24:11.028452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.249 [2024-11-15 10:24:11.087306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.508 [2024-11-15 10:24:11.144782] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.508 [2024-11-15 10:24:11.182898] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:10.508 [2024-11-15 10:24:11.182966] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:10.508 [2024-11-15 10:24:11.182987] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:10.508 [2024-11-15 10:24:11.304942] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:10.768 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:10.769 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.769 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:10.769 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:10.769 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:10.769 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.769 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:10.769 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:10.769 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:10.769 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.769 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.769 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.769 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.769 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.769 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.769 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.769 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:10.769 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:10.769 [2024-11-15 10:24:11.413236] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:06:10.769 [2024-11-15 10:24:11.413339] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60415 ] 00:06:10.769 [2024-11-15 10:24:11.552770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.769 [2024-11-15 10:24:11.613589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.027 [2024-11-15 10:24:11.665432] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.027 [2024-11-15 10:24:11.702496] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:11.027 [2024-11-15 10:24:11.702565] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:11.027 [2024-11-15 10:24:11.702600] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:11.027 [2024-11-15 10:24:11.818874] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:11.285 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:11.286 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:11.286 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:11.286 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:11.286 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:11.286 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:11.286 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:06:11.286 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:11.286 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:11.286 10:24:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.286 [2024-11-15 10:24:11.935970] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:06:11.286 [2024-11-15 10:24:11.936072] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60427 ] 00:06:11.286 [2024-11-15 10:24:12.073818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.286 [2024-11-15 10:24:12.136343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.544 [2024-11-15 10:24:12.195936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.544  [2024-11-15T10:24:12.655Z] Copying: 512/512 [B] (average 500 kBps) 00:06:11.802 00:06:11.802 10:24:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ u7kz8xgwripdrxcljlobhyorrvwgki8en0iq9ra3v1rb13u9i5ysmos636ouvx0nuh9yzj9ag413sua2pchygx3d0goel9vdfode2tinlcuuix6gvj4rf2fgaes3fzuvqqh8mr0gqq7mi5i0jp3h3rtnnqjofzm5eq77x8ooxj36hdyqjzhgonjqj1jrym2clezsmbv4fsoojtc8grdhd1vkg2mf8rk7zlmiir8kkbiu68a6a48o96prh5swxop7pdanojh3yeaahsx1vio04qb93b4l5s5a7f9fipqz68l9h6u8nos3azzu2d30j6lcoi8610460lawwcg780ting7lf70trxvdai5l0yqg60fc80m6osd3q20el51ag9flipj2nrd6pd77y6diphm2zhqp48ujj5iwyt6g1qej6no0ej4v7ii95v9ckdcprx33u80578kxzlcsv3mplylfn5ynw4bu76l66m5fccn9yw2doqujo2rf8bleolg6aivp == \u\7\k\z\8\x\g\w\r\i\p\d\r\x\c\l\j\l\o\b\h\y\o\r\r\v\w\g\k\i\8\e\n\0\i\q\9\r\a\3\v\1\r\b\1\3\u\9\i\5\y\s\m\o\s\6\3\6\o\u\v\x\0\n\u\h\9\y\z\j\9\a\g\4\1\3\s\u\a\2\p\c\h\y\g\x\3\d\0\g\o\e\l\9\v\d\f\o\d\e\2\t\i\n\l\c\u\u\i\x\6\g\v\j\4\r\f\2\f\g\a\e\s\3\f\z\u\v\q\q\h\8\m\r\0\g\q\q\7\m\i\5\i\0\j\p\3\h\3\r\t\n\n\q\j\o\f\z\m\5\e\q\7\7\x\8\o\o\x\j\3\6\h\d\y\q\j\z\h\g\o\n\j\q\j\1\j\r\y\m\2\c\l\e\z\s\m\b\v\4\f\s\o\o\j\t\c\8\g\r\d\h\d\1\v\k\g\2\m\f\8\r\k\7\z\l\m\i\i\r\8\k\k\b\i\u\6\8\a\6\a\4\8\o\9\6\p\r\h\5\s\w\x\o\p\7\p\d\a\n\o\j\h\3\y\e\a\a\h\s\x\1\v\i\o\0\4\q\b\9\3\b\4\l\5\s\5\a\7\f\9\f\i\p\q\z\6\8\l\9\h\6\u\8\n\o\s\3\a\z\z\u\2\d\3\0\j\6\l\c\o\i\8\6\1\0\4\6\0\l\a\w\w\c\g\7\8\0\t\i\n\g\7\l\f\7\0\t\r\x\v\d\a\i\5\l\0\y\q\g\6\0\f\c\8\0\m\6\o\s\d\3\q\2\0\e\l\5\1\a\g\9\f\l\i\p\j\2\n\r\d\6\p\d\7\7\y\6\d\i\p\h\m\2\z\h\q\p\4\8\u\j\j\5\i\w\y\t\6\g\1\q\e\j\6\n\o\0\e\j\4\v\7\i\i\9\5\v\9\c\k\d\c\p\r\x\3\3\u\8\0\5\7\8\k\x\z\l\c\s\v\3\m\p\l\y\l\f\n\5\y\n\w\4\b\u\7\6\l\6\6\m\5\f\c\c\n\9\y\w\2\d\o\q\u\j\o\2\r\f\8\b\l\e\o\l\g\6\a\i\v\p ]] 00:06:11.802 00:06:11.802 real 0m1.637s 00:06:11.802 user 0m0.896s 00:06:11.802 sys 0m0.412s 00:06:11.802 10:24:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:11.802 10:24:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:11.802 ************************************ 00:06:11.802 END TEST dd_flag_nofollow_forced_aio 00:06:11.802 ************************************ 00:06:11.802 10:24:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:06:11.802 10:24:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:11.802 10:24:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:11.802 10:24:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:11.802 ************************************ 00:06:11.802 START TEST dd_flag_noatime_forced_aio 00:06:11.802 ************************************ 00:06:11.802 10:24:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1127 -- # noatime 00:06:11.802 10:24:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:11.802 10:24:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:11.802 10:24:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:11.802 10:24:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:11.802 10:24:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:11.802 10:24:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:11.802 10:24:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1731666252 00:06:11.802 10:24:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.802 10:24:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1731666252 00:06:11.802 10:24:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:12.737 10:24:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:12.737 [2024-11-15 10:24:13.581837] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:12.737 [2024-11-15 10:24:13.581940] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60464 ] 00:06:12.996 [2024-11-15 10:24:13.734926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.996 [2024-11-15 10:24:13.794731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.255 [2024-11-15 10:24:13.854529] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.255  [2024-11-15T10:24:14.367Z] Copying: 512/512 [B] (average 500 kBps) 00:06:13.514 00:06:13.514 10:24:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:13.514 10:24:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1731666252 )) 00:06:13.514 10:24:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:13.514 10:24:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1731666252 )) 00:06:13.514 10:24:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:13.514 [2024-11-15 10:24:14.185115] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:06:13.514 [2024-11-15 10:24:14.185224] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60480 ] 00:06:13.514 [2024-11-15 10:24:14.333230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.773 [2024-11-15 10:24:14.386230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.773 [2024-11-15 10:24:14.437774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.773  [2024-11-15T10:24:14.887Z] Copying: 512/512 [B] (average 500 kBps) 00:06:14.034 00:06:14.034 10:24:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:14.034 10:24:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1731666254 )) 00:06:14.034 00:06:14.034 real 0m2.165s 00:06:14.034 user 0m0.605s 00:06:14.034 sys 0m0.309s 00:06:14.034 10:24:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:14.034 10:24:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:14.034 ************************************ 00:06:14.034 END TEST dd_flag_noatime_forced_aio 00:06:14.034 ************************************ 00:06:14.034 10:24:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:14.035 10:24:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:14.035 10:24:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:14.035 10:24:14 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:14.035 ************************************ 00:06:14.035 START TEST dd_flags_misc_forced_aio 00:06:14.035 ************************************ 00:06:14.035 10:24:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1127 -- # io 00:06:14.035 10:24:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:14.035 10:24:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:14.035 10:24:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:14.035 10:24:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:14.035 10:24:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:14.035 10:24:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:14.035 10:24:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:14.035 10:24:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:14.035 10:24:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:14.035 [2024-11-15 10:24:14.785110] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:06:14.035 [2024-11-15 10:24:14.785213] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60507 ] 00:06:14.294 [2024-11-15 10:24:14.930247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.294 [2024-11-15 10:24:14.982066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.294 [2024-11-15 10:24:15.037885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.294  [2024-11-15T10:24:15.406Z] Copying: 512/512 [B] (average 500 kBps) 00:06:14.553 00:06:14.553 10:24:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ed1bnbk0767yrxrc5qjupg9bacapbadobcnex32esr7s7jkipf0jlo27t6mft74l15ag76ajn52n2msady5ehd4w7rhobg7m0jbakzhapa7nueumzleibus2v99eqqb9aoj75dkcfcyguii3l1cjnbl5kxikui22ossvdajg63qklqm9g5k6w5lougyv0c7mjgu7v1oj7jokdrw9ja8x1wezo4ui4evz0n0uqvkegq7dhc31p94uywwvjko1w1jmszeplaznaggmcs3lo4jzyv34m74lq95y7gqe0rzjq0p1oyondt2vpvchtwnxuhirwj62xb62nxks1xhyps3wewr0cie6x5h1euspyjdkwfnybk5cie6iatd6xvmprs169ngeooae5rkel0t89iedzq295d5hxdpt7g34u6xfzz98hv52ott0wop3pe88rtqnmsoq1fxm0ij0gp2ucs3ekwgecc6jlysqkktrkui3cuo4o0fret0uc7m80rrth49l == 
\e\d\1\b\n\b\k\0\7\6\7\y\r\x\r\c\5\q\j\u\p\g\9\b\a\c\a\p\b\a\d\o\b\c\n\e\x\3\2\e\s\r\7\s\7\j\k\i\p\f\0\j\l\o\2\7\t\6\m\f\t\7\4\l\1\5\a\g\7\6\a\j\n\5\2\n\2\m\s\a\d\y\5\e\h\d\4\w\7\r\h\o\b\g\7\m\0\j\b\a\k\z\h\a\p\a\7\n\u\e\u\m\z\l\e\i\b\u\s\2\v\9\9\e\q\q\b\9\a\o\j\7\5\d\k\c\f\c\y\g\u\i\i\3\l\1\c\j\n\b\l\5\k\x\i\k\u\i\2\2\o\s\s\v\d\a\j\g\6\3\q\k\l\q\m\9\g\5\k\6\w\5\l\o\u\g\y\v\0\c\7\m\j\g\u\7\v\1\o\j\7\j\o\k\d\r\w\9\j\a\8\x\1\w\e\z\o\4\u\i\4\e\v\z\0\n\0\u\q\v\k\e\g\q\7\d\h\c\3\1\p\9\4\u\y\w\w\v\j\k\o\1\w\1\j\m\s\z\e\p\l\a\z\n\a\g\g\m\c\s\3\l\o\4\j\z\y\v\3\4\m\7\4\l\q\9\5\y\7\g\q\e\0\r\z\j\q\0\p\1\o\y\o\n\d\t\2\v\p\v\c\h\t\w\n\x\u\h\i\r\w\j\6\2\x\b\6\2\n\x\k\s\1\x\h\y\p\s\3\w\e\w\r\0\c\i\e\6\x\5\h\1\e\u\s\p\y\j\d\k\w\f\n\y\b\k\5\c\i\e\6\i\a\t\d\6\x\v\m\p\r\s\1\6\9\n\g\e\o\o\a\e\5\r\k\e\l\0\t\8\9\i\e\d\z\q\2\9\5\d\5\h\x\d\p\t\7\g\3\4\u\6\x\f\z\z\9\8\h\v\5\2\o\t\t\0\w\o\p\3\p\e\8\8\r\t\q\n\m\s\o\q\1\f\x\m\0\i\j\0\g\p\2\u\c\s\3\e\k\w\g\e\c\c\6\j\l\y\s\q\k\k\t\r\k\u\i\3\c\u\o\4\o\0\f\r\e\t\0\u\c\7\m\8\0\r\r\t\h\4\9\l ]] 00:06:14.553 10:24:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:14.553 10:24:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:14.553 [2024-11-15 10:24:15.337260] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:06:14.553 [2024-11-15 10:24:15.337373] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60514 ] 00:06:14.812 [2024-11-15 10:24:15.482712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.812 [2024-11-15 10:24:15.528358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.812 [2024-11-15 10:24:15.580243] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.812  [2024-11-15T10:24:15.924Z] Copying: 512/512 [B] (average 500 kBps) 00:06:15.071 00:06:15.071 10:24:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ed1bnbk0767yrxrc5qjupg9bacapbadobcnex32esr7s7jkipf0jlo27t6mft74l15ag76ajn52n2msady5ehd4w7rhobg7m0jbakzhapa7nueumzleibus2v99eqqb9aoj75dkcfcyguii3l1cjnbl5kxikui22ossvdajg63qklqm9g5k6w5lougyv0c7mjgu7v1oj7jokdrw9ja8x1wezo4ui4evz0n0uqvkegq7dhc31p94uywwvjko1w1jmszeplaznaggmcs3lo4jzyv34m74lq95y7gqe0rzjq0p1oyondt2vpvchtwnxuhirwj62xb62nxks1xhyps3wewr0cie6x5h1euspyjdkwfnybk5cie6iatd6xvmprs169ngeooae5rkel0t89iedzq295d5hxdpt7g34u6xfzz98hv52ott0wop3pe88rtqnmsoq1fxm0ij0gp2ucs3ekwgecc6jlysqkktrkui3cuo4o0fret0uc7m80rrth49l == 
\e\d\1\b\n\b\k\0\7\6\7\y\r\x\r\c\5\q\j\u\p\g\9\b\a\c\a\p\b\a\d\o\b\c\n\e\x\3\2\e\s\r\7\s\7\j\k\i\p\f\0\j\l\o\2\7\t\6\m\f\t\7\4\l\1\5\a\g\7\6\a\j\n\5\2\n\2\m\s\a\d\y\5\e\h\d\4\w\7\r\h\o\b\g\7\m\0\j\b\a\k\z\h\a\p\a\7\n\u\e\u\m\z\l\e\i\b\u\s\2\v\9\9\e\q\q\b\9\a\o\j\7\5\d\k\c\f\c\y\g\u\i\i\3\l\1\c\j\n\b\l\5\k\x\i\k\u\i\2\2\o\s\s\v\d\a\j\g\6\3\q\k\l\q\m\9\g\5\k\6\w\5\l\o\u\g\y\v\0\c\7\m\j\g\u\7\v\1\o\j\7\j\o\k\d\r\w\9\j\a\8\x\1\w\e\z\o\4\u\i\4\e\v\z\0\n\0\u\q\v\k\e\g\q\7\d\h\c\3\1\p\9\4\u\y\w\w\v\j\k\o\1\w\1\j\m\s\z\e\p\l\a\z\n\a\g\g\m\c\s\3\l\o\4\j\z\y\v\3\4\m\7\4\l\q\9\5\y\7\g\q\e\0\r\z\j\q\0\p\1\o\y\o\n\d\t\2\v\p\v\c\h\t\w\n\x\u\h\i\r\w\j\6\2\x\b\6\2\n\x\k\s\1\x\h\y\p\s\3\w\e\w\r\0\c\i\e\6\x\5\h\1\e\u\s\p\y\j\d\k\w\f\n\y\b\k\5\c\i\e\6\i\a\t\d\6\x\v\m\p\r\s\1\6\9\n\g\e\o\o\a\e\5\r\k\e\l\0\t\8\9\i\e\d\z\q\2\9\5\d\5\h\x\d\p\t\7\g\3\4\u\6\x\f\z\z\9\8\h\v\5\2\o\t\t\0\w\o\p\3\p\e\8\8\r\t\q\n\m\s\o\q\1\f\x\m\0\i\j\0\g\p\2\u\c\s\3\e\k\w\g\e\c\c\6\j\l\y\s\q\k\k\t\r\k\u\i\3\c\u\o\4\o\0\f\r\e\t\0\u\c\7\m\8\0\r\r\t\h\4\9\l ]] 00:06:15.071 10:24:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:15.071 10:24:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:15.071 [2024-11-15 10:24:15.871573] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:06:15.071 [2024-11-15 10:24:15.871681] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60522 ] 00:06:15.330 [2024-11-15 10:24:16.017921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.330 [2024-11-15 10:24:16.066914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.330 [2024-11-15 10:24:16.122686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.330  [2024-11-15T10:24:16.442Z] Copying: 512/512 [B] (average 166 kBps) 00:06:15.589 00:06:15.589 10:24:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ed1bnbk0767yrxrc5qjupg9bacapbadobcnex32esr7s7jkipf0jlo27t6mft74l15ag76ajn52n2msady5ehd4w7rhobg7m0jbakzhapa7nueumzleibus2v99eqqb9aoj75dkcfcyguii3l1cjnbl5kxikui22ossvdajg63qklqm9g5k6w5lougyv0c7mjgu7v1oj7jokdrw9ja8x1wezo4ui4evz0n0uqvkegq7dhc31p94uywwvjko1w1jmszeplaznaggmcs3lo4jzyv34m74lq95y7gqe0rzjq0p1oyondt2vpvchtwnxuhirwj62xb62nxks1xhyps3wewr0cie6x5h1euspyjdkwfnybk5cie6iatd6xvmprs169ngeooae5rkel0t89iedzq295d5hxdpt7g34u6xfzz98hv52ott0wop3pe88rtqnmsoq1fxm0ij0gp2ucs3ekwgecc6jlysqkktrkui3cuo4o0fret0uc7m80rrth49l == 
\e\d\1\b\n\b\k\0\7\6\7\y\r\x\r\c\5\q\j\u\p\g\9\b\a\c\a\p\b\a\d\o\b\c\n\e\x\3\2\e\s\r\7\s\7\j\k\i\p\f\0\j\l\o\2\7\t\6\m\f\t\7\4\l\1\5\a\g\7\6\a\j\n\5\2\n\2\m\s\a\d\y\5\e\h\d\4\w\7\r\h\o\b\g\7\m\0\j\b\a\k\z\h\a\p\a\7\n\u\e\u\m\z\l\e\i\b\u\s\2\v\9\9\e\q\q\b\9\a\o\j\7\5\d\k\c\f\c\y\g\u\i\i\3\l\1\c\j\n\b\l\5\k\x\i\k\u\i\2\2\o\s\s\v\d\a\j\g\6\3\q\k\l\q\m\9\g\5\k\6\w\5\l\o\u\g\y\v\0\c\7\m\j\g\u\7\v\1\o\j\7\j\o\k\d\r\w\9\j\a\8\x\1\w\e\z\o\4\u\i\4\e\v\z\0\n\0\u\q\v\k\e\g\q\7\d\h\c\3\1\p\9\4\u\y\w\w\v\j\k\o\1\w\1\j\m\s\z\e\p\l\a\z\n\a\g\g\m\c\s\3\l\o\4\j\z\y\v\3\4\m\7\4\l\q\9\5\y\7\g\q\e\0\r\z\j\q\0\p\1\o\y\o\n\d\t\2\v\p\v\c\h\t\w\n\x\u\h\i\r\w\j\6\2\x\b\6\2\n\x\k\s\1\x\h\y\p\s\3\w\e\w\r\0\c\i\e\6\x\5\h\1\e\u\s\p\y\j\d\k\w\f\n\y\b\k\5\c\i\e\6\i\a\t\d\6\x\v\m\p\r\s\1\6\9\n\g\e\o\o\a\e\5\r\k\e\l\0\t\8\9\i\e\d\z\q\2\9\5\d\5\h\x\d\p\t\7\g\3\4\u\6\x\f\z\z\9\8\h\v\5\2\o\t\t\0\w\o\p\3\p\e\8\8\r\t\q\n\m\s\o\q\1\f\x\m\0\i\j\0\g\p\2\u\c\s\3\e\k\w\g\e\c\c\6\j\l\y\s\q\k\k\t\r\k\u\i\3\c\u\o\4\o\0\f\r\e\t\0\u\c\7\m\8\0\r\r\t\h\4\9\l ]] 00:06:15.589 10:24:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:15.589 10:24:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:15.589 [2024-11-15 10:24:16.410099] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:06:15.589 [2024-11-15 10:24:16.410203] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60529 ] 00:06:15.848 [2024-11-15 10:24:16.558640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.848 [2024-11-15 10:24:16.617299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.848 [2024-11-15 10:24:16.673762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.108  [2024-11-15T10:24:16.961Z] Copying: 512/512 [B] (average 500 kBps) 00:06:16.108 00:06:16.108 10:24:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ed1bnbk0767yrxrc5qjupg9bacapbadobcnex32esr7s7jkipf0jlo27t6mft74l15ag76ajn52n2msady5ehd4w7rhobg7m0jbakzhapa7nueumzleibus2v99eqqb9aoj75dkcfcyguii3l1cjnbl5kxikui22ossvdajg63qklqm9g5k6w5lougyv0c7mjgu7v1oj7jokdrw9ja8x1wezo4ui4evz0n0uqvkegq7dhc31p94uywwvjko1w1jmszeplaznaggmcs3lo4jzyv34m74lq95y7gqe0rzjq0p1oyondt2vpvchtwnxuhirwj62xb62nxks1xhyps3wewr0cie6x5h1euspyjdkwfnybk5cie6iatd6xvmprs169ngeooae5rkel0t89iedzq295d5hxdpt7g34u6xfzz98hv52ott0wop3pe88rtqnmsoq1fxm0ij0gp2ucs3ekwgecc6jlysqkktrkui3cuo4o0fret0uc7m80rrth49l == 
\e\d\1\b\n\b\k\0\7\6\7\y\r\x\r\c\5\q\j\u\p\g\9\b\a\c\a\p\b\a\d\o\b\c\n\e\x\3\2\e\s\r\7\s\7\j\k\i\p\f\0\j\l\o\2\7\t\6\m\f\t\7\4\l\1\5\a\g\7\6\a\j\n\5\2\n\2\m\s\a\d\y\5\e\h\d\4\w\7\r\h\o\b\g\7\m\0\j\b\a\k\z\h\a\p\a\7\n\u\e\u\m\z\l\e\i\b\u\s\2\v\9\9\e\q\q\b\9\a\o\j\7\5\d\k\c\f\c\y\g\u\i\i\3\l\1\c\j\n\b\l\5\k\x\i\k\u\i\2\2\o\s\s\v\d\a\j\g\6\3\q\k\l\q\m\9\g\5\k\6\w\5\l\o\u\g\y\v\0\c\7\m\j\g\u\7\v\1\o\j\7\j\o\k\d\r\w\9\j\a\8\x\1\w\e\z\o\4\u\i\4\e\v\z\0\n\0\u\q\v\k\e\g\q\7\d\h\c\3\1\p\9\4\u\y\w\w\v\j\k\o\1\w\1\j\m\s\z\e\p\l\a\z\n\a\g\g\m\c\s\3\l\o\4\j\z\y\v\3\4\m\7\4\l\q\9\5\y\7\g\q\e\0\r\z\j\q\0\p\1\o\y\o\n\d\t\2\v\p\v\c\h\t\w\n\x\u\h\i\r\w\j\6\2\x\b\6\2\n\x\k\s\1\x\h\y\p\s\3\w\e\w\r\0\c\i\e\6\x\5\h\1\e\u\s\p\y\j\d\k\w\f\n\y\b\k\5\c\i\e\6\i\a\t\d\6\x\v\m\p\r\s\1\6\9\n\g\e\o\o\a\e\5\r\k\e\l\0\t\8\9\i\e\d\z\q\2\9\5\d\5\h\x\d\p\t\7\g\3\4\u\6\x\f\z\z\9\8\h\v\5\2\o\t\t\0\w\o\p\3\p\e\8\8\r\t\q\n\m\s\o\q\1\f\x\m\0\i\j\0\g\p\2\u\c\s\3\e\k\w\g\e\c\c\6\j\l\y\s\q\k\k\t\r\k\u\i\3\c\u\o\4\o\0\f\r\e\t\0\u\c\7\m\8\0\r\r\t\h\4\9\l ]] 00:06:16.108 10:24:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:16.108 10:24:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:16.108 10:24:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:16.108 10:24:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:16.108 10:24:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:16.108 10:24:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:16.367 [2024-11-15 10:24:16.985352] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
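[editor's note] The xtrace above is dd/posix.sh walking every combination of read flag (direct, nonblock) and write flag (direct, nonblock, sync, dsync) through spdk_dd's forced-AIO path and then pattern-matching the output file against the input. A simplified sketch of the loop being exercised is below; it is not the real posix.sh (gen_bytes and the [[ ... == ... ]] content check are stood in for with plain dd and cmp), but the spdk_dd invocation and flag matrix are the ones visible in the log.

    #!/usr/bin/env bash
    # Sketch of the dd_flags_misc_forced_aio matrix (simplified).
    set -euo pipefail

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    in=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    out=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

    flags_ro=(direct nonblock)                 # input (read) open flags
    flags_rw=("${flags_ro[@]}" sync dsync)     # output (write) open flags

    for flag_ro in "${flags_ro[@]}"; do
        # 512 bytes of fresh payload per input flag (gen_bytes 512 in the real test;
        # /dev/urandom is a stand-in here)
        dd if=/dev/urandom of="$in" bs=512 count=1 status=none
        for flag_rw in "${flags_rw[@]}"; do
            "$SPDK_DD" --aio --if="$in" --iflag="$flag_ro" \
                       --of="$out" --oflag="$flag_rw"
            cmp "$in" "$out"   # posix.sh does this as the long [[ ... == ... ]] match above
        done
    done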
00:06:16.367 [2024-11-15 10:24:16.985482] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60537 ] 00:06:16.367 [2024-11-15 10:24:17.134888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.367 [2024-11-15 10:24:17.194033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.626 [2024-11-15 10:24:17.247353] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.626  [2024-11-15T10:24:17.479Z] Copying: 512/512 [B] (average 500 kBps) 00:06:16.626 00:06:16.886 10:24:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 80zfic6menidcoqqr790i3n51ntwdccr0yz7p5ll6ye5tc46cykq43s4yzsso3ux67o3zrc4mjt3qu8g52ro6o2hye3ckfqd7yjy13y6u71403hgmfcl1zdg0gedycf2d2ucpkridork6lz4j5m3mdthnlgs7tg4jof6bqqlt6desrsdsejq65vts4ub0c1a5ke13jfhddmpdgh1sr8nq6lunwj17a275k4q5iunfmtutkxz44xd7kd4j6laqmk4gkmvjuwovvvyajv7fer2lkiikjw8ec7kjaym6w5rl7z19547bn2jsrqa8azr3bog26xy9ukh5i93zd5gmjgk6schibfovtxt3nwtb9asxximb4thc4h91b4d8ywoizqgweuw4j18kufdqwd1gsmtm29qi3h50mdfwudygnpnw48iybslujrt1vbkrjecfha2mtv2tz979744jwhtsu0ah0wn2iug4z88u3z7jb5jaazkpihehj1rs5qt86up70f9 == \8\0\z\f\i\c\6\m\e\n\i\d\c\o\q\q\r\7\9\0\i\3\n\5\1\n\t\w\d\c\c\r\0\y\z\7\p\5\l\l\6\y\e\5\t\c\4\6\c\y\k\q\4\3\s\4\y\z\s\s\o\3\u\x\6\7\o\3\z\r\c\4\m\j\t\3\q\u\8\g\5\2\r\o\6\o\2\h\y\e\3\c\k\f\q\d\7\y\j\y\1\3\y\6\u\7\1\4\0\3\h\g\m\f\c\l\1\z\d\g\0\g\e\d\y\c\f\2\d\2\u\c\p\k\r\i\d\o\r\k\6\l\z\4\j\5\m\3\m\d\t\h\n\l\g\s\7\t\g\4\j\o\f\6\b\q\q\l\t\6\d\e\s\r\s\d\s\e\j\q\6\5\v\t\s\4\u\b\0\c\1\a\5\k\e\1\3\j\f\h\d\d\m\p\d\g\h\1\s\r\8\n\q\6\l\u\n\w\j\1\7\a\2\7\5\k\4\q\5\i\u\n\f\m\t\u\t\k\x\z\4\4\x\d\7\k\d\4\j\6\l\a\q\m\k\4\g\k\m\v\j\u\w\o\v\v\v\y\a\j\v\7\f\e\r\2\l\k\i\i\k\j\w\8\e\c\7\k\j\a\y\m\6\w\5\r\l\7\z\1\9\5\4\7\b\n\2\j\s\r\q\a\8\a\z\r\3\b\o\g\2\6\x\y\9\u\k\h\5\i\9\3\z\d\5\g\m\j\g\k\6\s\c\h\i\b\f\o\v\t\x\t\3\n\w\t\b\9\a\s\x\x\i\m\b\4\t\h\c\4\h\9\1\b\4\d\8\y\w\o\i\z\q\g\w\e\u\w\4\j\1\8\k\u\f\d\q\w\d\1\g\s\m\t\m\2\9\q\i\3\h\5\0\m\d\f\w\u\d\y\g\n\p\n\w\4\8\i\y\b\s\l\u\j\r\t\1\v\b\k\r\j\e\c\f\h\a\2\m\t\v\2\t\z\9\7\9\7\4\4\j\w\h\t\s\u\0\a\h\0\w\n\2\i\u\g\4\z\8\8\u\3\z\7\j\b\5\j\a\a\z\k\p\i\h\e\h\j\1\r\s\5\q\t\8\6\u\p\7\0\f\9 ]] 00:06:16.886 10:24:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:16.886 10:24:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:16.886 [2024-11-15 10:24:17.536536] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:16.886 [2024-11-15 10:24:17.536657] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60544 ] 00:06:16.886 [2024-11-15 10:24:17.685523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.145 [2024-11-15 10:24:17.750293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.145 [2024-11-15 10:24:17.806291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.145  [2024-11-15T10:24:18.257Z] Copying: 512/512 [B] (average 500 kBps) 00:06:17.404 00:06:17.404 10:24:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 80zfic6menidcoqqr790i3n51ntwdccr0yz7p5ll6ye5tc46cykq43s4yzsso3ux67o3zrc4mjt3qu8g52ro6o2hye3ckfqd7yjy13y6u71403hgmfcl1zdg0gedycf2d2ucpkridork6lz4j5m3mdthnlgs7tg4jof6bqqlt6desrsdsejq65vts4ub0c1a5ke13jfhddmpdgh1sr8nq6lunwj17a275k4q5iunfmtutkxz44xd7kd4j6laqmk4gkmvjuwovvvyajv7fer2lkiikjw8ec7kjaym6w5rl7z19547bn2jsrqa8azr3bog26xy9ukh5i93zd5gmjgk6schibfovtxt3nwtb9asxximb4thc4h91b4d8ywoizqgweuw4j18kufdqwd1gsmtm29qi3h50mdfwudygnpnw48iybslujrt1vbkrjecfha2mtv2tz979744jwhtsu0ah0wn2iug4z88u3z7jb5jaazkpihehj1rs5qt86up70f9 == \8\0\z\f\i\c\6\m\e\n\i\d\c\o\q\q\r\7\9\0\i\3\n\5\1\n\t\w\d\c\c\r\0\y\z\7\p\5\l\l\6\y\e\5\t\c\4\6\c\y\k\q\4\3\s\4\y\z\s\s\o\3\u\x\6\7\o\3\z\r\c\4\m\j\t\3\q\u\8\g\5\2\r\o\6\o\2\h\y\e\3\c\k\f\q\d\7\y\j\y\1\3\y\6\u\7\1\4\0\3\h\g\m\f\c\l\1\z\d\g\0\g\e\d\y\c\f\2\d\2\u\c\p\k\r\i\d\o\r\k\6\l\z\4\j\5\m\3\m\d\t\h\n\l\g\s\7\t\g\4\j\o\f\6\b\q\q\l\t\6\d\e\s\r\s\d\s\e\j\q\6\5\v\t\s\4\u\b\0\c\1\a\5\k\e\1\3\j\f\h\d\d\m\p\d\g\h\1\s\r\8\n\q\6\l\u\n\w\j\1\7\a\2\7\5\k\4\q\5\i\u\n\f\m\t\u\t\k\x\z\4\4\x\d\7\k\d\4\j\6\l\a\q\m\k\4\g\k\m\v\j\u\w\o\v\v\v\y\a\j\v\7\f\e\r\2\l\k\i\i\k\j\w\8\e\c\7\k\j\a\y\m\6\w\5\r\l\7\z\1\9\5\4\7\b\n\2\j\s\r\q\a\8\a\z\r\3\b\o\g\2\6\x\y\9\u\k\h\5\i\9\3\z\d\5\g\m\j\g\k\6\s\c\h\i\b\f\o\v\t\x\t\3\n\w\t\b\9\a\s\x\x\i\m\b\4\t\h\c\4\h\9\1\b\4\d\8\y\w\o\i\z\q\g\w\e\u\w\4\j\1\8\k\u\f\d\q\w\d\1\g\s\m\t\m\2\9\q\i\3\h\5\0\m\d\f\w\u\d\y\g\n\p\n\w\4\8\i\y\b\s\l\u\j\r\t\1\v\b\k\r\j\e\c\f\h\a\2\m\t\v\2\t\z\9\7\9\7\4\4\j\w\h\t\s\u\0\a\h\0\w\n\2\i\u\g\4\z\8\8\u\3\z\7\j\b\5\j\a\a\z\k\p\i\h\e\h\j\1\r\s\5\q\t\8\6\u\p\7\0\f\9 ]] 00:06:17.404 10:24:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:17.404 10:24:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:17.404 [2024-11-15 10:24:18.102043] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:17.404 [2024-11-15 10:24:18.102177] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60552 ] 00:06:17.404 [2024-11-15 10:24:18.243673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.663 [2024-11-15 10:24:18.291834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.663 [2024-11-15 10:24:18.347182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.663  [2024-11-15T10:24:18.774Z] Copying: 512/512 [B] (average 250 kBps) 00:06:17.921 00:06:17.921 10:24:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 80zfic6menidcoqqr790i3n51ntwdccr0yz7p5ll6ye5tc46cykq43s4yzsso3ux67o3zrc4mjt3qu8g52ro6o2hye3ckfqd7yjy13y6u71403hgmfcl1zdg0gedycf2d2ucpkridork6lz4j5m3mdthnlgs7tg4jof6bqqlt6desrsdsejq65vts4ub0c1a5ke13jfhddmpdgh1sr8nq6lunwj17a275k4q5iunfmtutkxz44xd7kd4j6laqmk4gkmvjuwovvvyajv7fer2lkiikjw8ec7kjaym6w5rl7z19547bn2jsrqa8azr3bog26xy9ukh5i93zd5gmjgk6schibfovtxt3nwtb9asxximb4thc4h91b4d8ywoizqgweuw4j18kufdqwd1gsmtm29qi3h50mdfwudygnpnw48iybslujrt1vbkrjecfha2mtv2tz979744jwhtsu0ah0wn2iug4z88u3z7jb5jaazkpihehj1rs5qt86up70f9 == \8\0\z\f\i\c\6\m\e\n\i\d\c\o\q\q\r\7\9\0\i\3\n\5\1\n\t\w\d\c\c\r\0\y\z\7\p\5\l\l\6\y\e\5\t\c\4\6\c\y\k\q\4\3\s\4\y\z\s\s\o\3\u\x\6\7\o\3\z\r\c\4\m\j\t\3\q\u\8\g\5\2\r\o\6\o\2\h\y\e\3\c\k\f\q\d\7\y\j\y\1\3\y\6\u\7\1\4\0\3\h\g\m\f\c\l\1\z\d\g\0\g\e\d\y\c\f\2\d\2\u\c\p\k\r\i\d\o\r\k\6\l\z\4\j\5\m\3\m\d\t\h\n\l\g\s\7\t\g\4\j\o\f\6\b\q\q\l\t\6\d\e\s\r\s\d\s\e\j\q\6\5\v\t\s\4\u\b\0\c\1\a\5\k\e\1\3\j\f\h\d\d\m\p\d\g\h\1\s\r\8\n\q\6\l\u\n\w\j\1\7\a\2\7\5\k\4\q\5\i\u\n\f\m\t\u\t\k\x\z\4\4\x\d\7\k\d\4\j\6\l\a\q\m\k\4\g\k\m\v\j\u\w\o\v\v\v\y\a\j\v\7\f\e\r\2\l\k\i\i\k\j\w\8\e\c\7\k\j\a\y\m\6\w\5\r\l\7\z\1\9\5\4\7\b\n\2\j\s\r\q\a\8\a\z\r\3\b\o\g\2\6\x\y\9\u\k\h\5\i\9\3\z\d\5\g\m\j\g\k\6\s\c\h\i\b\f\o\v\t\x\t\3\n\w\t\b\9\a\s\x\x\i\m\b\4\t\h\c\4\h\9\1\b\4\d\8\y\w\o\i\z\q\g\w\e\u\w\4\j\1\8\k\u\f\d\q\w\d\1\g\s\m\t\m\2\9\q\i\3\h\5\0\m\d\f\w\u\d\y\g\n\p\n\w\4\8\i\y\b\s\l\u\j\r\t\1\v\b\k\r\j\e\c\f\h\a\2\m\t\v\2\t\z\9\7\9\7\4\4\j\w\h\t\s\u\0\a\h\0\w\n\2\i\u\g\4\z\8\8\u\3\z\7\j\b\5\j\a\a\z\k\p\i\h\e\h\j\1\r\s\5\q\t\8\6\u\p\7\0\f\9 ]] 00:06:17.921 10:24:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:17.921 10:24:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:17.921 [2024-11-15 10:24:18.632618] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:17.921 [2024-11-15 10:24:18.632722] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60559 ] 00:06:18.180 [2024-11-15 10:24:18.777623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.180 [2024-11-15 10:24:18.833580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.180 [2024-11-15 10:24:18.885896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.180  [2024-11-15T10:24:19.292Z] Copying: 512/512 [B] (average 500 kBps) 00:06:18.439 00:06:18.439 10:24:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 80zfic6menidcoqqr790i3n51ntwdccr0yz7p5ll6ye5tc46cykq43s4yzsso3ux67o3zrc4mjt3qu8g52ro6o2hye3ckfqd7yjy13y6u71403hgmfcl1zdg0gedycf2d2ucpkridork6lz4j5m3mdthnlgs7tg4jof6bqqlt6desrsdsejq65vts4ub0c1a5ke13jfhddmpdgh1sr8nq6lunwj17a275k4q5iunfmtutkxz44xd7kd4j6laqmk4gkmvjuwovvvyajv7fer2lkiikjw8ec7kjaym6w5rl7z19547bn2jsrqa8azr3bog26xy9ukh5i93zd5gmjgk6schibfovtxt3nwtb9asxximb4thc4h91b4d8ywoizqgweuw4j18kufdqwd1gsmtm29qi3h50mdfwudygnpnw48iybslujrt1vbkrjecfha2mtv2tz979744jwhtsu0ah0wn2iug4z88u3z7jb5jaazkpihehj1rs5qt86up70f9 == \8\0\z\f\i\c\6\m\e\n\i\d\c\o\q\q\r\7\9\0\i\3\n\5\1\n\t\w\d\c\c\r\0\y\z\7\p\5\l\l\6\y\e\5\t\c\4\6\c\y\k\q\4\3\s\4\y\z\s\s\o\3\u\x\6\7\o\3\z\r\c\4\m\j\t\3\q\u\8\g\5\2\r\o\6\o\2\h\y\e\3\c\k\f\q\d\7\y\j\y\1\3\y\6\u\7\1\4\0\3\h\g\m\f\c\l\1\z\d\g\0\g\e\d\y\c\f\2\d\2\u\c\p\k\r\i\d\o\r\k\6\l\z\4\j\5\m\3\m\d\t\h\n\l\g\s\7\t\g\4\j\o\f\6\b\q\q\l\t\6\d\e\s\r\s\d\s\e\j\q\6\5\v\t\s\4\u\b\0\c\1\a\5\k\e\1\3\j\f\h\d\d\m\p\d\g\h\1\s\r\8\n\q\6\l\u\n\w\j\1\7\a\2\7\5\k\4\q\5\i\u\n\f\m\t\u\t\k\x\z\4\4\x\d\7\k\d\4\j\6\l\a\q\m\k\4\g\k\m\v\j\u\w\o\v\v\v\y\a\j\v\7\f\e\r\2\l\k\i\i\k\j\w\8\e\c\7\k\j\a\y\m\6\w\5\r\l\7\z\1\9\5\4\7\b\n\2\j\s\r\q\a\8\a\z\r\3\b\o\g\2\6\x\y\9\u\k\h\5\i\9\3\z\d\5\g\m\j\g\k\6\s\c\h\i\b\f\o\v\t\x\t\3\n\w\t\b\9\a\s\x\x\i\m\b\4\t\h\c\4\h\9\1\b\4\d\8\y\w\o\i\z\q\g\w\e\u\w\4\j\1\8\k\u\f\d\q\w\d\1\g\s\m\t\m\2\9\q\i\3\h\5\0\m\d\f\w\u\d\y\g\n\p\n\w\4\8\i\y\b\s\l\u\j\r\t\1\v\b\k\r\j\e\c\f\h\a\2\m\t\v\2\t\z\9\7\9\7\4\4\j\w\h\t\s\u\0\a\h\0\w\n\2\i\u\g\4\z\8\8\u\3\z\7\j\b\5\j\a\a\z\k\p\i\h\e\h\j\1\r\s\5\q\t\8\6\u\p\7\0\f\9 ]] 00:06:18.439 00:06:18.439 real 0m4.408s 00:06:18.439 user 0m2.358s 00:06:18.439 sys 0m1.084s 00:06:18.439 10:24:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:18.439 10:24:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:18.439 ************************************ 00:06:18.439 END TEST dd_flags_misc_forced_aio 00:06:18.439 ************************************ 00:06:18.439 10:24:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:18.439 10:24:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:18.439 10:24:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:18.439 ************************************ 00:06:18.439 END TEST spdk_dd_posix 00:06:18.439 ************************************ 00:06:18.439 00:06:18.439 real 0m20.232s 00:06:18.439 user 0m9.645s 00:06:18.439 sys 0m6.567s 00:06:18.439 10:24:19 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:06:18.439 10:24:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:18.439 10:24:19 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:18.439 10:24:19 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:18.439 10:24:19 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:18.439 10:24:19 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:18.439 ************************************ 00:06:18.439 START TEST spdk_dd_malloc 00:06:18.439 ************************************ 00:06:18.439 10:24:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:18.699 * Looking for test storage... 00:06:18.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:18.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.699 --rc genhtml_branch_coverage=1 00:06:18.699 --rc genhtml_function_coverage=1 00:06:18.699 --rc genhtml_legend=1 00:06:18.699 --rc geninfo_all_blocks=1 00:06:18.699 --rc geninfo_unexecuted_blocks=1 00:06:18.699 00:06:18.699 ' 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:18.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.699 --rc genhtml_branch_coverage=1 00:06:18.699 --rc genhtml_function_coverage=1 00:06:18.699 --rc genhtml_legend=1 00:06:18.699 --rc geninfo_all_blocks=1 00:06:18.699 --rc geninfo_unexecuted_blocks=1 00:06:18.699 00:06:18.699 ' 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:18.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.699 --rc genhtml_branch_coverage=1 00:06:18.699 --rc genhtml_function_coverage=1 00:06:18.699 --rc genhtml_legend=1 00:06:18.699 --rc geninfo_all_blocks=1 00:06:18.699 --rc geninfo_unexecuted_blocks=1 00:06:18.699 00:06:18.699 ' 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:18.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.699 --rc genhtml_branch_coverage=1 00:06:18.699 --rc genhtml_function_coverage=1 00:06:18.699 --rc genhtml_legend=1 00:06:18.699 --rc geninfo_all_blocks=1 00:06:18.699 --rc geninfo_unexecuted_blocks=1 00:06:18.699 00:06:18.699 ' 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.699 10:24:19 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:18.699 ************************************ 00:06:18.699 START TEST dd_malloc_copy 00:06:18.699 ************************************ 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1127 -- # malloc_copy 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
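[editor's note] The method_bdev_malloc_create_* arrays assembled here are what gen_conf turns into the JSON handed to spdk_dd on /dev/fd/62, printed a little further down in the log. A hand-written equivalent of the whole dd_malloc_copy step is sketched below; it mirrors the config and the two invocations shown in the log (malloc0 -> malloc1, then back) but is a sketch, not the actual malloc.sh.

    #!/usr/bin/env bash
    # Sketch: copy one 512 MiB malloc bdev into another with spdk_dd,
    # feeding the bdev config as JSON on a substituted file descriptor.
    set -euo pipefail
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

    conf='{
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
              "method": "bdev_malloc_create" },
            { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" },
              "method": "bdev_malloc_create" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }'

    # malloc0 -> malloc1, then malloc1 -> malloc0 to verify the data survives the round trip
    "$SPDK_DD" --ib=malloc0 --ob=malloc1 --json <(printf '%s' "$conf")
    "$SPDK_DD" --ib=malloc1 --ob=malloc0 --json <(printf '%s' "$conf")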
00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:18.699 10:24:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:18.699 [2024-11-15 10:24:19.482124] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:06:18.699 [2024-11-15 10:24:19.482716] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60641 ] 00:06:18.699 { 00:06:18.699 "subsystems": [ 00:06:18.699 { 00:06:18.699 "subsystem": "bdev", 00:06:18.699 "config": [ 00:06:18.699 { 00:06:18.699 "params": { 00:06:18.699 "block_size": 512, 00:06:18.699 "num_blocks": 1048576, 00:06:18.699 "name": "malloc0" 00:06:18.699 }, 00:06:18.699 "method": "bdev_malloc_create" 00:06:18.699 }, 00:06:18.699 { 00:06:18.699 "params": { 00:06:18.699 "block_size": 512, 00:06:18.700 "num_blocks": 1048576, 00:06:18.700 "name": "malloc1" 00:06:18.700 }, 00:06:18.700 "method": "bdev_malloc_create" 00:06:18.700 }, 00:06:18.700 { 00:06:18.700 "method": "bdev_wait_for_examine" 00:06:18.700 } 00:06:18.700 ] 00:06:18.700 } 00:06:18.700 ] 00:06:18.700 } 00:06:18.959 [2024-11-15 10:24:19.627988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.959 [2024-11-15 10:24:19.673573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.959 [2024-11-15 10:24:19.728216] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.338  [2024-11-15T10:24:22.128Z] Copying: 204/512 [MB] (204 MBps) [2024-11-15T10:24:22.696Z] Copying: 408/512 [MB] (203 MBps) [2024-11-15T10:24:23.264Z] Copying: 512/512 [MB] (average 203 MBps) 00:06:22.411 00:06:22.411 10:24:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:22.411 10:24:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:22.411 10:24:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:22.411 10:24:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:22.411 { 00:06:22.411 "subsystems": [ 00:06:22.411 { 00:06:22.411 "subsystem": "bdev", 00:06:22.411 "config": [ 00:06:22.411 { 00:06:22.411 "params": { 00:06:22.411 "block_size": 512, 00:06:22.411 "num_blocks": 1048576, 00:06:22.411 "name": "malloc0" 00:06:22.411 }, 00:06:22.411 "method": "bdev_malloc_create" 00:06:22.411 }, 00:06:22.411 { 00:06:22.411 "params": { 00:06:22.411 "block_size": 512, 00:06:22.411 "num_blocks": 1048576, 00:06:22.411 "name": "malloc1" 00:06:22.411 }, 00:06:22.411 "method": 
"bdev_malloc_create" 00:06:22.411 }, 00:06:22.411 { 00:06:22.411 "method": "bdev_wait_for_examine" 00:06:22.411 } 00:06:22.411 ] 00:06:22.411 } 00:06:22.411 ] 00:06:22.411 } 00:06:22.411 [2024-11-15 10:24:23.210651] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:06:22.411 [2024-11-15 10:24:23.210776] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60691 ] 00:06:22.670 [2024-11-15 10:24:23.357597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.670 [2024-11-15 10:24:23.411262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.670 [2024-11-15 10:24:23.470270] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.057  [2024-11-15T10:24:25.846Z] Copying: 215/512 [MB] (215 MBps) [2024-11-15T10:24:26.413Z] Copying: 431/512 [MB] (215 MBps) [2024-11-15T10:24:26.981Z] Copying: 512/512 [MB] (average 216 MBps) 00:06:26.128 00:06:26.128 00:06:26.128 real 0m7.303s 00:06:26.128 user 0m6.313s 00:06:26.128 sys 0m0.840s 00:06:26.128 10:24:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:26.128 ************************************ 00:06:26.128 END TEST dd_malloc_copy 00:06:26.128 10:24:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:26.128 ************************************ 00:06:26.128 00:06:26.128 real 0m7.541s 00:06:26.128 user 0m6.445s 00:06:26.128 sys 0m0.950s 00:06:26.128 10:24:26 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:26.128 10:24:26 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:26.128 ************************************ 00:06:26.128 END TEST spdk_dd_malloc 00:06:26.128 ************************************ 00:06:26.129 10:24:26 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:26.129 10:24:26 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:26.129 10:24:26 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:26.129 10:24:26 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:26.129 ************************************ 00:06:26.129 START TEST spdk_dd_bdev_to_bdev 00:06:26.129 ************************************ 00:06:26.129 10:24:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:26.129 * Looking for test storage... 
00:06:26.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:26.129 10:24:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:26.129 10:24:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lcov --version 00:06:26.129 10:24:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:26.388 10:24:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:26.388 10:24:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.388 10:24:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.388 10:24:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.388 10:24:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.388 10:24:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.388 10:24:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.388 10:24:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.388 10:24:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.388 10:24:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.388 10:24:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.388 10:24:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.388 10:24:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:06:26.388 10:24:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:06:26.388 10:24:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.388 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.388 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:06:26.388 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:06:26.388 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.388 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:06:26.388 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.388 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:06:26.388 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:06:26.388 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.388 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:06:26.388 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.388 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:26.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.389 --rc genhtml_branch_coverage=1 00:06:26.389 --rc genhtml_function_coverage=1 00:06:26.389 --rc genhtml_legend=1 00:06:26.389 --rc geninfo_all_blocks=1 00:06:26.389 --rc geninfo_unexecuted_blocks=1 00:06:26.389 00:06:26.389 ' 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:26.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.389 --rc genhtml_branch_coverage=1 00:06:26.389 --rc genhtml_function_coverage=1 00:06:26.389 --rc genhtml_legend=1 00:06:26.389 --rc geninfo_all_blocks=1 00:06:26.389 --rc geninfo_unexecuted_blocks=1 00:06:26.389 00:06:26.389 ' 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:26.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.389 --rc genhtml_branch_coverage=1 00:06:26.389 --rc genhtml_function_coverage=1 00:06:26.389 --rc genhtml_legend=1 00:06:26.389 --rc geninfo_all_blocks=1 00:06:26.389 --rc geninfo_unexecuted_blocks=1 00:06:26.389 00:06:26.389 ' 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:26.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.389 --rc genhtml_branch_coverage=1 00:06:26.389 --rc genhtml_function_coverage=1 00:06:26.389 --rc genhtml_legend=1 00:06:26.389 --rc geninfo_all_blocks=1 00:06:26.389 --rc geninfo_unexecuted_blocks=1 00:06:26.389 00:06:26.389 ' 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:26.389 10:24:27 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:26.389 ************************************ 00:06:26.389 START TEST dd_inflate_file 00:06:26.389 ************************************ 00:06:26.389 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:26.389 [2024-11-15 10:24:27.083811] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
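[editor's note] dd_inflate_file grows dd.dump0 by appending 64 MiB of zeroes after the 27-byte magic line ('This Is Our Magic, find it' plus a newline) written just before it, which is why the wc -c a little further down reports 67108891 bytes (27 + 64 * 1048576). Expressed with coreutils dd for comparison (a sketch; the test drives spdk_dd so the append goes through the SPDK AIO path it is validating):

    # Sketch of the inflate step using coreutils dd
    magic='This Is Our Magic, find it'
    f=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0

    echo "$magic" > "$f"                                   # 27 bytes incl. newline
    dd if=/dev/zero of="$f" oflag=append conv=notrunc \
       bs=1048576 count=64 status=none                     # append 64 MiB of zeroes
    wc -c < "$f"                                           # -> 67108891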
00:06:26.389 [2024-11-15 10:24:27.083951] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60809 ] 00:06:26.389 [2024-11-15 10:24:27.229885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.648 [2024-11-15 10:24:27.283270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.648 [2024-11-15 10:24:27.336752] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.648  [2024-11-15T10:24:27.759Z] Copying: 64/64 [MB] (average 1600 MBps) 00:06:26.906 00:06:26.906 00:06:26.906 real 0m0.562s 00:06:26.907 user 0m0.316s 00:06:26.907 sys 0m0.299s 00:06:26.907 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:26.907 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:26.907 ************************************ 00:06:26.907 END TEST dd_inflate_file 00:06:26.907 ************************************ 00:06:26.907 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:26.907 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:26.907 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:26.907 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:26.907 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:26.907 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:06:26.907 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:26.907 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:26.907 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:26.907 ************************************ 00:06:26.907 START TEST dd_copy_to_out_bdev 00:06:26.907 ************************************ 00:06:26.907 10:24:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:26.907 { 00:06:26.907 "subsystems": [ 00:06:26.907 { 00:06:26.907 "subsystem": "bdev", 00:06:26.907 "config": [ 00:06:26.907 { 00:06:26.907 "params": { 00:06:26.907 "trtype": "pcie", 00:06:26.907 "traddr": "0000:00:10.0", 00:06:26.907 "name": "Nvme0" 00:06:26.907 }, 00:06:26.907 "method": "bdev_nvme_attach_controller" 00:06:26.907 }, 00:06:26.907 { 00:06:26.907 "params": { 00:06:26.907 "trtype": "pcie", 00:06:26.907 "traddr": "0000:00:11.0", 00:06:26.907 "name": "Nvme1" 00:06:26.907 }, 00:06:26.907 "method": "bdev_nvme_attach_controller" 00:06:26.907 }, 00:06:26.907 { 00:06:26.907 "method": "bdev_wait_for_examine" 00:06:26.907 } 00:06:26.907 ] 00:06:26.907 } 00:06:26.907 ] 00:06:26.907 } 00:06:26.907 [2024-11-15 10:24:27.708895] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
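[editor's note] dd_copy_to_out_bdev streams the inflated 64 MiB dump file into the first NVMe namespace; the JSON printed above is just the two bdev_nvme_attach_controller calls plus bdev_wait_for_examine. A standalone equivalent is sketched below (the PCI addresses are the ones from this run and will differ on other hosts):

    #!/usr/bin/env bash
    set -euo pipefail
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

    conf='{
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller" },
            { "params": { "trtype": "pcie", "traddr": "0000:00:11.0", "name": "Nvme1" },
              "method": "bdev_nvme_attach_controller" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }'

    # File -> bdev: push dd.dump0 into namespace 1 of the Nvme0 controller
    "$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
               --ob=Nvme0n1 --json <(printf '%s' "$conf")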
00:06:26.907 [2024-11-15 10:24:27.709000] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60841 ] 00:06:27.165 [2024-11-15 10:24:27.853800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.165 [2024-11-15 10:24:27.914683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.165 [2024-11-15 10:24:27.969282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.540  [2024-11-15T10:24:29.393Z] Copying: 55/64 [MB] (55 MBps) [2024-11-15T10:24:29.653Z] Copying: 64/64 [MB] (average 55 MBps) 00:06:28.800 00:06:28.800 00:06:28.800 real 0m1.866s 00:06:28.800 user 0m1.641s 00:06:28.800 sys 0m1.497s 00:06:28.800 10:24:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:28.800 ************************************ 00:06:28.800 END TEST dd_copy_to_out_bdev 00:06:28.800 ************************************ 00:06:28.800 10:24:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:28.800 10:24:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:28.800 10:24:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:28.800 10:24:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:28.800 10:24:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:28.800 10:24:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:28.800 ************************************ 00:06:28.800 START TEST dd_offset_magic 00:06:28.800 ************************************ 00:06:28.800 10:24:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1127 -- # offset_magic 00:06:28.800 10:24:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:28.800 10:24:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:28.800 10:24:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:28.800 10:24:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:28.800 10:24:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:28.800 10:24:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:28.800 10:24:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:28.800 10:24:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:28.800 [2024-11-15 10:24:29.632920] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
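[editor's note] dd_offset_magic round-trips the magic line through the second namespace at MiB offsets 16 and 64: write 65 MiB from Nvme0n1 into Nvme1n1 at the offset, read 1 MiB back from that same offset into dd.dump1, and confirm the first 26 bytes are still the magic, as the log below shows for both offsets. Stripped of the gen_conf plumbing, each iteration is roughly the sketch below; it continues the previous sketch and reuses $SPDK_DD and the same two-controller JSON (loaded from a hypothetical file here).

    # Sketch of one dd_offset_magic pass (continuation of the sketch above)
    magic='This Is Our Magic, find it'
    dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    conf=$(< /tmp/nvme_bdevs.json)   # hypothetical path holding the two-controller JSON above

    for offset in 16 64; do
        # bdev -> bdev: place 65 MiB (magic first) at $offset MiB of Nvme1n1
        "$SPDK_DD" --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek="$offset" \
                   --bs=1048576 --json <(printf '%s' "$conf")
        # bdev -> file: pull 1 MiB back from the same offset
        "$SPDK_DD" --ib=Nvme1n1 --of="$dump1" --count=1 --skip="$offset" \
                   --bs=1048576 --json <(printf '%s' "$conf")
        # the first 26 bytes of the read-back must still be the magic
        read -rn26 magic_check < "$dump1"
        [[ $magic_check == "$magic" ]]   # the harness runs under set -e, so a mismatch aborts
    done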
00:06:28.800 [2024-11-15 10:24:29.633038] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60882 ] 00:06:28.800 { 00:06:28.800 "subsystems": [ 00:06:28.800 { 00:06:28.800 "subsystem": "bdev", 00:06:28.800 "config": [ 00:06:28.800 { 00:06:28.800 "params": { 00:06:28.800 "trtype": "pcie", 00:06:28.800 "traddr": "0000:00:10.0", 00:06:28.800 "name": "Nvme0" 00:06:28.800 }, 00:06:28.800 "method": "bdev_nvme_attach_controller" 00:06:28.800 }, 00:06:28.800 { 00:06:28.800 "params": { 00:06:28.800 "trtype": "pcie", 00:06:28.800 "traddr": "0000:00:11.0", 00:06:28.800 "name": "Nvme1" 00:06:28.800 }, 00:06:28.800 "method": "bdev_nvme_attach_controller" 00:06:28.800 }, 00:06:28.800 { 00:06:28.800 "method": "bdev_wait_for_examine" 00:06:28.800 } 00:06:28.800 ] 00:06:28.800 } 00:06:28.800 ] 00:06:28.800 } 00:06:29.059 [2024-11-15 10:24:29.781707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.059 [2024-11-15 10:24:29.824818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.059 [2024-11-15 10:24:29.876030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.318  [2024-11-15T10:24:30.429Z] Copying: 65/65 [MB] (average 866 MBps) 00:06:29.576 00:06:29.576 10:24:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:29.576 10:24:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:29.576 10:24:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:29.576 10:24:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:29.576 [2024-11-15 10:24:30.407543] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:29.576 [2024-11-15 10:24:30.407641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60902 ] 00:06:29.576 { 00:06:29.576 "subsystems": [ 00:06:29.576 { 00:06:29.576 "subsystem": "bdev", 00:06:29.576 "config": [ 00:06:29.576 { 00:06:29.576 "params": { 00:06:29.576 "trtype": "pcie", 00:06:29.576 "traddr": "0000:00:10.0", 00:06:29.576 "name": "Nvme0" 00:06:29.576 }, 00:06:29.576 "method": "bdev_nvme_attach_controller" 00:06:29.576 }, 00:06:29.576 { 00:06:29.576 "params": { 00:06:29.576 "trtype": "pcie", 00:06:29.576 "traddr": "0000:00:11.0", 00:06:29.576 "name": "Nvme1" 00:06:29.576 }, 00:06:29.576 "method": "bdev_nvme_attach_controller" 00:06:29.576 }, 00:06:29.576 { 00:06:29.576 "method": "bdev_wait_for_examine" 00:06:29.576 } 00:06:29.576 ] 00:06:29.576 } 00:06:29.576 ] 00:06:29.576 } 00:06:29.835 [2024-11-15 10:24:30.555725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.835 [2024-11-15 10:24:30.605234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.835 [2024-11-15 10:24:30.657907] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.094  [2024-11-15T10:24:31.205Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:30.352 00:06:30.352 10:24:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:30.352 10:24:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:30.352 10:24:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:30.353 10:24:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:30.353 10:24:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:30.353 10:24:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:30.353 10:24:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:30.353 [2024-11-15 10:24:31.057663] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:30.353 [2024-11-15 10:24:31.057794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60924 ] 00:06:30.353 { 00:06:30.353 "subsystems": [ 00:06:30.353 { 00:06:30.353 "subsystem": "bdev", 00:06:30.353 "config": [ 00:06:30.353 { 00:06:30.353 "params": { 00:06:30.353 "trtype": "pcie", 00:06:30.353 "traddr": "0000:00:10.0", 00:06:30.353 "name": "Nvme0" 00:06:30.353 }, 00:06:30.353 "method": "bdev_nvme_attach_controller" 00:06:30.353 }, 00:06:30.353 { 00:06:30.353 "params": { 00:06:30.353 "trtype": "pcie", 00:06:30.353 "traddr": "0000:00:11.0", 00:06:30.353 "name": "Nvme1" 00:06:30.353 }, 00:06:30.353 "method": "bdev_nvme_attach_controller" 00:06:30.353 }, 00:06:30.353 { 00:06:30.353 "method": "bdev_wait_for_examine" 00:06:30.353 } 00:06:30.353 ] 00:06:30.353 } 00:06:30.353 ] 00:06:30.353 } 00:06:30.353 [2024-11-15 10:24:31.200912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.614 [2024-11-15 10:24:31.241770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.614 [2024-11-15 10:24:31.293760] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.874  [2024-11-15T10:24:31.986Z] Copying: 65/65 [MB] (average 955 MBps) 00:06:31.133 00:06:31.133 10:24:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:31.133 10:24:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:31.133 10:24:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:31.133 10:24:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:31.133 [2024-11-15 10:24:31.810499] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:31.133 [2024-11-15 10:24:31.810606] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60933 ] 00:06:31.133 { 00:06:31.133 "subsystems": [ 00:06:31.133 { 00:06:31.133 "subsystem": "bdev", 00:06:31.133 "config": [ 00:06:31.133 { 00:06:31.133 "params": { 00:06:31.133 "trtype": "pcie", 00:06:31.133 "traddr": "0000:00:10.0", 00:06:31.133 "name": "Nvme0" 00:06:31.133 }, 00:06:31.133 "method": "bdev_nvme_attach_controller" 00:06:31.133 }, 00:06:31.133 { 00:06:31.133 "params": { 00:06:31.133 "trtype": "pcie", 00:06:31.133 "traddr": "0000:00:11.0", 00:06:31.133 "name": "Nvme1" 00:06:31.133 }, 00:06:31.133 "method": "bdev_nvme_attach_controller" 00:06:31.133 }, 00:06:31.133 { 00:06:31.133 "method": "bdev_wait_for_examine" 00:06:31.133 } 00:06:31.133 ] 00:06:31.133 } 00:06:31.133 ] 00:06:31.133 } 00:06:31.133 [2024-11-15 10:24:31.960515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.393 [2024-11-15 10:24:32.006377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.393 [2024-11-15 10:24:32.059074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.393  [2024-11-15T10:24:32.506Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:31.653 00:06:31.653 10:24:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:31.653 10:24:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:31.653 00:06:31.653 real 0m2.836s 00:06:31.653 user 0m2.050s 00:06:31.653 sys 0m0.862s 00:06:31.653 10:24:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:31.653 10:24:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:31.653 ************************************ 00:06:31.653 END TEST dd_offset_magic 00:06:31.653 ************************************ 00:06:31.653 10:24:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:31.653 10:24:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:31.653 10:24:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:31.653 10:24:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:31.653 10:24:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:31.653 10:24:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:31.653 10:24:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:31.653 10:24:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:31.653 10:24:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:31.653 10:24:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:31.653 10:24:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:31.912 [2024-11-15 10:24:32.510342] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:31.913 [2024-11-15 10:24:32.510479] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60970 ] 00:06:31.913 { 00:06:31.913 "subsystems": [ 00:06:31.913 { 00:06:31.913 "subsystem": "bdev", 00:06:31.913 "config": [ 00:06:31.913 { 00:06:31.913 "params": { 00:06:31.913 "trtype": "pcie", 00:06:31.913 "traddr": "0000:00:10.0", 00:06:31.913 "name": "Nvme0" 00:06:31.913 }, 00:06:31.913 "method": "bdev_nvme_attach_controller" 00:06:31.913 }, 00:06:31.913 { 00:06:31.913 "params": { 00:06:31.913 "trtype": "pcie", 00:06:31.913 "traddr": "0000:00:11.0", 00:06:31.913 "name": "Nvme1" 00:06:31.913 }, 00:06:31.913 "method": "bdev_nvme_attach_controller" 00:06:31.913 }, 00:06:31.913 { 00:06:31.913 "method": "bdev_wait_for_examine" 00:06:31.913 } 00:06:31.913 ] 00:06:31.913 } 00:06:31.913 ] 00:06:31.913 } 00:06:31.913 [2024-11-15 10:24:32.657855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.913 [2024-11-15 10:24:32.700226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.913 [2024-11-15 10:24:32.751922] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.171  [2024-11-15T10:24:33.282Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:06:32.429 00:06:32.429 10:24:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:32.429 10:24:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:32.429 10:24:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:32.429 10:24:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:32.429 10:24:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:32.429 10:24:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:32.429 10:24:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:32.429 10:24:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:32.429 10:24:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:32.429 10:24:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:32.429 { 00:06:32.429 "subsystems": [ 00:06:32.429 { 00:06:32.429 "subsystem": "bdev", 00:06:32.429 "config": [ 00:06:32.429 { 00:06:32.429 "params": { 00:06:32.429 "trtype": "pcie", 00:06:32.429 "traddr": "0000:00:10.0", 00:06:32.429 "name": "Nvme0" 00:06:32.429 }, 00:06:32.429 "method": "bdev_nvme_attach_controller" 00:06:32.429 }, 00:06:32.429 { 00:06:32.429 "params": { 00:06:32.429 "trtype": "pcie", 00:06:32.429 "traddr": "0000:00:11.0", 00:06:32.429 "name": "Nvme1" 00:06:32.429 }, 00:06:32.429 "method": "bdev_nvme_attach_controller" 00:06:32.429 }, 00:06:32.429 { 00:06:32.429 "method": "bdev_wait_for_examine" 00:06:32.429 } 00:06:32.429 ] 00:06:32.429 } 00:06:32.429 ] 00:06:32.429 } 00:06:32.429 [2024-11-15 10:24:33.173615] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:32.429 [2024-11-15 10:24:33.173746] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60991 ] 00:06:32.688 [2024-11-15 10:24:33.318791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.688 [2024-11-15 10:24:33.358764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.688 [2024-11-15 10:24:33.411192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.946  [2024-11-15T10:24:33.799Z] Copying: 5120/5120 [kB] (average 714 MBps) 00:06:32.946 00:06:32.946 10:24:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:33.207 00:06:33.207 real 0m6.974s 00:06:33.207 user 0m5.128s 00:06:33.207 sys 0m3.346s 00:06:33.207 10:24:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:33.207 ************************************ 00:06:33.207 10:24:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:33.207 END TEST spdk_dd_bdev_to_bdev 00:06:33.207 ************************************ 00:06:33.207 10:24:33 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:33.207 10:24:33 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:33.207 10:24:33 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:33.207 10:24:33 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:33.207 10:24:33 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:33.207 ************************************ 00:06:33.207 START TEST spdk_dd_uring 00:06:33.207 ************************************ 00:06:33.207 10:24:33 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:33.207 * Looking for test storage... 
00:06:33.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:33.207 10:24:33 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:33.207 10:24:33 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lcov --version 00:06:33.207 10:24:33 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:33.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.207 --rc genhtml_branch_coverage=1 00:06:33.207 --rc genhtml_function_coverage=1 00:06:33.207 --rc genhtml_legend=1 00:06:33.207 --rc geninfo_all_blocks=1 00:06:33.207 --rc geninfo_unexecuted_blocks=1 00:06:33.207 00:06:33.207 ' 00:06:33.207 10:24:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:33.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.207 --rc genhtml_branch_coverage=1 00:06:33.207 --rc genhtml_function_coverage=1 00:06:33.207 --rc genhtml_legend=1 00:06:33.207 --rc geninfo_all_blocks=1 00:06:33.208 --rc geninfo_unexecuted_blocks=1 00:06:33.208 00:06:33.208 ' 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:33.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.208 --rc genhtml_branch_coverage=1 00:06:33.208 --rc genhtml_function_coverage=1 00:06:33.208 --rc genhtml_legend=1 00:06:33.208 --rc geninfo_all_blocks=1 00:06:33.208 --rc geninfo_unexecuted_blocks=1 00:06:33.208 00:06:33.208 ' 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:33.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.208 --rc genhtml_branch_coverage=1 00:06:33.208 --rc genhtml_function_coverage=1 00:06:33.208 --rc genhtml_legend=1 00:06:33.208 --rc geninfo_all_blocks=1 00:06:33.208 --rc geninfo_unexecuted_blocks=1 00:06:33.208 00:06:33.208 ' 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:33.208 ************************************ 00:06:33.208 START TEST dd_uring_copy 00:06:33.208 ************************************ 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1127 -- # uring_zram_copy 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:33.208 
10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:33.208 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:33.467 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=1tvnlchbvrfw374qweqsgv82fopt0kcxeiuvuokn79n76gvnzdp728fzugtshw4zcupmygsao0w8salii6kw7xj2e7d77mk1563olydjxhq8c0sj4x2bwvupnt9t3zws2ejiiujxbeeep0cg7h9dxhrdgifvfh8ibq8luu4kjgf3a93cfhzbpyixhjorrqa77fnny81l7v3rku2dcyja34pp1hiqpg2lahsge7ui4heaoyyigkh915eeig2xa4vxx82yu0psew4t2ojgzs2a05z4atgclgg1da1f5h8uxm4hsqlnh6kcqcqjaudpdre0sysm5g5eq1q1q3zwkrwijltggbjh23s8g6b67rh4oltywq5txyt362zbtp640aazj9rgibul92ybiqa1y2mijv1uqbdnx6o9egp179yxuhnlee6l5wpw6cxpdljd3lwx82z3tvmt4jjvmpj7lx9v03akco3m3stkq4e2fbnj7675xphf6mmxh2yzykwcuw8r5y0kxhvgub6t3ui8zibxg51akt7wkzcxko4aaieq7bs0dpwti09n6lvq9fycy86da97056ega39ly2zoojxgse16jxujvfsqhx5n7ia1quw7fxjsd87cqswmcfgnxzx66e9rna0wx4yltsir011m556yanbd2ktyhpeosyo23pmzncl7t7e50enw6hzwql0neav3ye6btlz8awu8hfk73adftvtnxxu149bilhqoabucmdl8oytbfhpzdbsl2yyn05zzoqgkt4yl6woknbqina4ljaij6tpwztmw8wriudltta3kj4pb0h98seluu6tgseduals00prj6aupmgnrfr40vdeasimsp26c0e4z2514feqh43ujtzzwfcan3s6p2psgc11rxayhusrsrswhthaadzx17ezl1jted1eiihqxko6ktpv148xnrh866adk69ti4rk4guzwan9jz6gm952u5u2cync9j0x1px7ueon1q4zxtp778gyybzvq9v2o 00:06:33.467 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
1tvnlchbvrfw374qweqsgv82fopt0kcxeiuvuokn79n76gvnzdp728fzugtshw4zcupmygsao0w8salii6kw7xj2e7d77mk1563olydjxhq8c0sj4x2bwvupnt9t3zws2ejiiujxbeeep0cg7h9dxhrdgifvfh8ibq8luu4kjgf3a93cfhzbpyixhjorrqa77fnny81l7v3rku2dcyja34pp1hiqpg2lahsge7ui4heaoyyigkh915eeig2xa4vxx82yu0psew4t2ojgzs2a05z4atgclgg1da1f5h8uxm4hsqlnh6kcqcqjaudpdre0sysm5g5eq1q1q3zwkrwijltggbjh23s8g6b67rh4oltywq5txyt362zbtp640aazj9rgibul92ybiqa1y2mijv1uqbdnx6o9egp179yxuhnlee6l5wpw6cxpdljd3lwx82z3tvmt4jjvmpj7lx9v03akco3m3stkq4e2fbnj7675xphf6mmxh2yzykwcuw8r5y0kxhvgub6t3ui8zibxg51akt7wkzcxko4aaieq7bs0dpwti09n6lvq9fycy86da97056ega39ly2zoojxgse16jxujvfsqhx5n7ia1quw7fxjsd87cqswmcfgnxzx66e9rna0wx4yltsir011m556yanbd2ktyhpeosyo23pmzncl7t7e50enw6hzwql0neav3ye6btlz8awu8hfk73adftvtnxxu149bilhqoabucmdl8oytbfhpzdbsl2yyn05zzoqgkt4yl6woknbqina4ljaij6tpwztmw8wriudltta3kj4pb0h98seluu6tgseduals00prj6aupmgnrfr40vdeasimsp26c0e4z2514feqh43ujtzzwfcan3s6p2psgc11rxayhusrsrswhthaadzx17ezl1jted1eiihqxko6ktpv148xnrh866adk69ti4rk4guzwan9jz6gm952u5u2cync9j0x1px7ueon1q4zxtp778gyybzvq9v2o 00:06:33.467 10:24:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:33.467 [2024-11-15 10:24:34.110167] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:06:33.467 [2024-11-15 10:24:34.110483] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61069 ] 00:06:33.467 [2024-11-15 10:24:34.257163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.467 [2024-11-15 10:24:34.300884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.726 [2024-11-15 10:24:34.352947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.293  [2024-11-15T10:24:35.404Z] Copying: 511/511 [MB] (average 1177 MBps) 00:06:34.551 00:06:34.551 10:24:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:34.551 10:24:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:06:34.551 10:24:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:34.551 10:24:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:34.810 [2024-11-15 10:24:35.414847] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:34.810 [2024-11-15 10:24:35.415113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61085 ] 00:06:34.810 { 00:06:34.810 "subsystems": [ 00:06:34.810 { 00:06:34.810 "subsystem": "bdev", 00:06:34.810 "config": [ 00:06:34.810 { 00:06:34.810 "params": { 00:06:34.810 "block_size": 512, 00:06:34.810 "num_blocks": 1048576, 00:06:34.810 "name": "malloc0" 00:06:34.810 }, 00:06:34.810 "method": "bdev_malloc_create" 00:06:34.810 }, 00:06:34.810 { 00:06:34.810 "params": { 00:06:34.810 "filename": "/dev/zram1", 00:06:34.810 "name": "uring0" 00:06:34.810 }, 00:06:34.810 "method": "bdev_uring_create" 00:06:34.810 }, 00:06:34.810 { 00:06:34.810 "method": "bdev_wait_for_examine" 00:06:34.810 } 00:06:34.810 ] 00:06:34.810 } 00:06:34.810 ] 00:06:34.810 } 00:06:34.810 [2024-11-15 10:24:35.557623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.810 [2024-11-15 10:24:35.599153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.810 [2024-11-15 10:24:35.653139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.187  [2024-11-15T10:24:38.007Z] Copying: 246/512 [MB] (246 MBps) [2024-11-15T10:24:38.007Z] Copying: 494/512 [MB] (248 MBps) [2024-11-15T10:24:38.575Z] Copying: 512/512 [MB] (average 247 MBps) 00:06:37.722 00:06:37.722 10:24:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:37.722 10:24:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:06:37.723 10:24:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:37.723 10:24:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:37.723 [2024-11-15 10:24:38.341940] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:37.723 [2024-11-15 10:24:38.342234] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61129 ] 00:06:37.723 { 00:06:37.723 "subsystems": [ 00:06:37.723 { 00:06:37.723 "subsystem": "bdev", 00:06:37.723 "config": [ 00:06:37.723 { 00:06:37.723 "params": { 00:06:37.723 "block_size": 512, 00:06:37.723 "num_blocks": 1048576, 00:06:37.723 "name": "malloc0" 00:06:37.723 }, 00:06:37.723 "method": "bdev_malloc_create" 00:06:37.723 }, 00:06:37.723 { 00:06:37.723 "params": { 00:06:37.723 "filename": "/dev/zram1", 00:06:37.723 "name": "uring0" 00:06:37.723 }, 00:06:37.723 "method": "bdev_uring_create" 00:06:37.723 }, 00:06:37.723 { 00:06:37.723 "method": "bdev_wait_for_examine" 00:06:37.723 } 00:06:37.723 ] 00:06:37.723 } 00:06:37.723 ] 00:06:37.723 } 00:06:37.723 [2024-11-15 10:24:38.488966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.723 [2024-11-15 10:24:38.529839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.982 [2024-11-15 10:24:38.581085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.919  [2024-11-15T10:24:41.150Z] Copying: 183/512 [MB] (183 MBps) [2024-11-15T10:24:41.718Z] Copying: 364/512 [MB] (181 MBps) [2024-11-15T10:24:41.977Z] Copying: 512/512 [MB] (average 182 MBps) 00:06:41.124 00:06:41.124 10:24:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:06:41.125 10:24:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 1tvnlchbvrfw374qweqsgv82fopt0kcxeiuvuokn79n76gvnzdp728fzugtshw4zcupmygsao0w8salii6kw7xj2e7d77mk1563olydjxhq8c0sj4x2bwvupnt9t3zws2ejiiujxbeeep0cg7h9dxhrdgifvfh8ibq8luu4kjgf3a93cfhzbpyixhjorrqa77fnny81l7v3rku2dcyja34pp1hiqpg2lahsge7ui4heaoyyigkh915eeig2xa4vxx82yu0psew4t2ojgzs2a05z4atgclgg1da1f5h8uxm4hsqlnh6kcqcqjaudpdre0sysm5g5eq1q1q3zwkrwijltggbjh23s8g6b67rh4oltywq5txyt362zbtp640aazj9rgibul92ybiqa1y2mijv1uqbdnx6o9egp179yxuhnlee6l5wpw6cxpdljd3lwx82z3tvmt4jjvmpj7lx9v03akco3m3stkq4e2fbnj7675xphf6mmxh2yzykwcuw8r5y0kxhvgub6t3ui8zibxg51akt7wkzcxko4aaieq7bs0dpwti09n6lvq9fycy86da97056ega39ly2zoojxgse16jxujvfsqhx5n7ia1quw7fxjsd87cqswmcfgnxzx66e9rna0wx4yltsir011m556yanbd2ktyhpeosyo23pmzncl7t7e50enw6hzwql0neav3ye6btlz8awu8hfk73adftvtnxxu149bilhqoabucmdl8oytbfhpzdbsl2yyn05zzoqgkt4yl6woknbqina4ljaij6tpwztmw8wriudltta3kj4pb0h98seluu6tgseduals00prj6aupmgnrfr40vdeasimsp26c0e4z2514feqh43ujtzzwfcan3s6p2psgc11rxayhusrsrswhthaadzx17ezl1jted1eiihqxko6ktpv148xnrh866adk69ti4rk4guzwan9jz6gm952u5u2cync9j0x1px7ueon1q4zxtp778gyybzvq9v2o == 
\1\t\v\n\l\c\h\b\v\r\f\w\3\7\4\q\w\e\q\s\g\v\8\2\f\o\p\t\0\k\c\x\e\i\u\v\u\o\k\n\7\9\n\7\6\g\v\n\z\d\p\7\2\8\f\z\u\g\t\s\h\w\4\z\c\u\p\m\y\g\s\a\o\0\w\8\s\a\l\i\i\6\k\w\7\x\j\2\e\7\d\7\7\m\k\1\5\6\3\o\l\y\d\j\x\h\q\8\c\0\s\j\4\x\2\b\w\v\u\p\n\t\9\t\3\z\w\s\2\e\j\i\i\u\j\x\b\e\e\e\p\0\c\g\7\h\9\d\x\h\r\d\g\i\f\v\f\h\8\i\b\q\8\l\u\u\4\k\j\g\f\3\a\9\3\c\f\h\z\b\p\y\i\x\h\j\o\r\r\q\a\7\7\f\n\n\y\8\1\l\7\v\3\r\k\u\2\d\c\y\j\a\3\4\p\p\1\h\i\q\p\g\2\l\a\h\s\g\e\7\u\i\4\h\e\a\o\y\y\i\g\k\h\9\1\5\e\e\i\g\2\x\a\4\v\x\x\8\2\y\u\0\p\s\e\w\4\t\2\o\j\g\z\s\2\a\0\5\z\4\a\t\g\c\l\g\g\1\d\a\1\f\5\h\8\u\x\m\4\h\s\q\l\n\h\6\k\c\q\c\q\j\a\u\d\p\d\r\e\0\s\y\s\m\5\g\5\e\q\1\q\1\q\3\z\w\k\r\w\i\j\l\t\g\g\b\j\h\2\3\s\8\g\6\b\6\7\r\h\4\o\l\t\y\w\q\5\t\x\y\t\3\6\2\z\b\t\p\6\4\0\a\a\z\j\9\r\g\i\b\u\l\9\2\y\b\i\q\a\1\y\2\m\i\j\v\1\u\q\b\d\n\x\6\o\9\e\g\p\1\7\9\y\x\u\h\n\l\e\e\6\l\5\w\p\w\6\c\x\p\d\l\j\d\3\l\w\x\8\2\z\3\t\v\m\t\4\j\j\v\m\p\j\7\l\x\9\v\0\3\a\k\c\o\3\m\3\s\t\k\q\4\e\2\f\b\n\j\7\6\7\5\x\p\h\f\6\m\m\x\h\2\y\z\y\k\w\c\u\w\8\r\5\y\0\k\x\h\v\g\u\b\6\t\3\u\i\8\z\i\b\x\g\5\1\a\k\t\7\w\k\z\c\x\k\o\4\a\a\i\e\q\7\b\s\0\d\p\w\t\i\0\9\n\6\l\v\q\9\f\y\c\y\8\6\d\a\9\7\0\5\6\e\g\a\3\9\l\y\2\z\o\o\j\x\g\s\e\1\6\j\x\u\j\v\f\s\q\h\x\5\n\7\i\a\1\q\u\w\7\f\x\j\s\d\8\7\c\q\s\w\m\c\f\g\n\x\z\x\6\6\e\9\r\n\a\0\w\x\4\y\l\t\s\i\r\0\1\1\m\5\5\6\y\a\n\b\d\2\k\t\y\h\p\e\o\s\y\o\2\3\p\m\z\n\c\l\7\t\7\e\5\0\e\n\w\6\h\z\w\q\l\0\n\e\a\v\3\y\e\6\b\t\l\z\8\a\w\u\8\h\f\k\7\3\a\d\f\t\v\t\n\x\x\u\1\4\9\b\i\l\h\q\o\a\b\u\c\m\d\l\8\o\y\t\b\f\h\p\z\d\b\s\l\2\y\y\n\0\5\z\z\o\q\g\k\t\4\y\l\6\w\o\k\n\b\q\i\n\a\4\l\j\a\i\j\6\t\p\w\z\t\m\w\8\w\r\i\u\d\l\t\t\a\3\k\j\4\p\b\0\h\9\8\s\e\l\u\u\6\t\g\s\e\d\u\a\l\s\0\0\p\r\j\6\a\u\p\m\g\n\r\f\r\4\0\v\d\e\a\s\i\m\s\p\2\6\c\0\e\4\z\2\5\1\4\f\e\q\h\4\3\u\j\t\z\z\w\f\c\a\n\3\s\6\p\2\p\s\g\c\1\1\r\x\a\y\h\u\s\r\s\r\s\w\h\t\h\a\a\d\z\x\1\7\e\z\l\1\j\t\e\d\1\e\i\i\h\q\x\k\o\6\k\t\p\v\1\4\8\x\n\r\h\8\6\6\a\d\k\6\9\t\i\4\r\k\4\g\u\z\w\a\n\9\j\z\6\g\m\9\5\2\u\5\u\2\c\y\n\c\9\j\0\x\1\p\x\7\u\e\o\n\1\q\4\z\x\t\p\7\7\8\g\y\y\b\z\v\q\9\v\2\o ]] 00:06:41.125 10:24:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:06:41.125 10:24:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 1tvnlchbvrfw374qweqsgv82fopt0kcxeiuvuokn79n76gvnzdp728fzugtshw4zcupmygsao0w8salii6kw7xj2e7d77mk1563olydjxhq8c0sj4x2bwvupnt9t3zws2ejiiujxbeeep0cg7h9dxhrdgifvfh8ibq8luu4kjgf3a93cfhzbpyixhjorrqa77fnny81l7v3rku2dcyja34pp1hiqpg2lahsge7ui4heaoyyigkh915eeig2xa4vxx82yu0psew4t2ojgzs2a05z4atgclgg1da1f5h8uxm4hsqlnh6kcqcqjaudpdre0sysm5g5eq1q1q3zwkrwijltggbjh23s8g6b67rh4oltywq5txyt362zbtp640aazj9rgibul92ybiqa1y2mijv1uqbdnx6o9egp179yxuhnlee6l5wpw6cxpdljd3lwx82z3tvmt4jjvmpj7lx9v03akco3m3stkq4e2fbnj7675xphf6mmxh2yzykwcuw8r5y0kxhvgub6t3ui8zibxg51akt7wkzcxko4aaieq7bs0dpwti09n6lvq9fycy86da97056ega39ly2zoojxgse16jxujvfsqhx5n7ia1quw7fxjsd87cqswmcfgnxzx66e9rna0wx4yltsir011m556yanbd2ktyhpeosyo23pmzncl7t7e50enw6hzwql0neav3ye6btlz8awu8hfk73adftvtnxxu149bilhqoabucmdl8oytbfhpzdbsl2yyn05zzoqgkt4yl6woknbqina4ljaij6tpwztmw8wriudltta3kj4pb0h98seluu6tgseduals00prj6aupmgnrfr40vdeasimsp26c0e4z2514feqh43ujtzzwfcan3s6p2psgc11rxayhusrsrswhthaadzx17ezl1jted1eiihqxko6ktpv148xnrh866adk69ti4rk4guzwan9jz6gm952u5u2cync9j0x1px7ueon1q4zxtp778gyybzvq9v2o == 
\1\t\v\n\l\c\h\b\v\r\f\w\3\7\4\q\w\e\q\s\g\v\8\2\f\o\p\t\0\k\c\x\e\i\u\v\u\o\k\n\7\9\n\7\6\g\v\n\z\d\p\7\2\8\f\z\u\g\t\s\h\w\4\z\c\u\p\m\y\g\s\a\o\0\w\8\s\a\l\i\i\6\k\w\7\x\j\2\e\7\d\7\7\m\k\1\5\6\3\o\l\y\d\j\x\h\q\8\c\0\s\j\4\x\2\b\w\v\u\p\n\t\9\t\3\z\w\s\2\e\j\i\i\u\j\x\b\e\e\e\p\0\c\g\7\h\9\d\x\h\r\d\g\i\f\v\f\h\8\i\b\q\8\l\u\u\4\k\j\g\f\3\a\9\3\c\f\h\z\b\p\y\i\x\h\j\o\r\r\q\a\7\7\f\n\n\y\8\1\l\7\v\3\r\k\u\2\d\c\y\j\a\3\4\p\p\1\h\i\q\p\g\2\l\a\h\s\g\e\7\u\i\4\h\e\a\o\y\y\i\g\k\h\9\1\5\e\e\i\g\2\x\a\4\v\x\x\8\2\y\u\0\p\s\e\w\4\t\2\o\j\g\z\s\2\a\0\5\z\4\a\t\g\c\l\g\g\1\d\a\1\f\5\h\8\u\x\m\4\h\s\q\l\n\h\6\k\c\q\c\q\j\a\u\d\p\d\r\e\0\s\y\s\m\5\g\5\e\q\1\q\1\q\3\z\w\k\r\w\i\j\l\t\g\g\b\j\h\2\3\s\8\g\6\b\6\7\r\h\4\o\l\t\y\w\q\5\t\x\y\t\3\6\2\z\b\t\p\6\4\0\a\a\z\j\9\r\g\i\b\u\l\9\2\y\b\i\q\a\1\y\2\m\i\j\v\1\u\q\b\d\n\x\6\o\9\e\g\p\1\7\9\y\x\u\h\n\l\e\e\6\l\5\w\p\w\6\c\x\p\d\l\j\d\3\l\w\x\8\2\z\3\t\v\m\t\4\j\j\v\m\p\j\7\l\x\9\v\0\3\a\k\c\o\3\m\3\s\t\k\q\4\e\2\f\b\n\j\7\6\7\5\x\p\h\f\6\m\m\x\h\2\y\z\y\k\w\c\u\w\8\r\5\y\0\k\x\h\v\g\u\b\6\t\3\u\i\8\z\i\b\x\g\5\1\a\k\t\7\w\k\z\c\x\k\o\4\a\a\i\e\q\7\b\s\0\d\p\w\t\i\0\9\n\6\l\v\q\9\f\y\c\y\8\6\d\a\9\7\0\5\6\e\g\a\3\9\l\y\2\z\o\o\j\x\g\s\e\1\6\j\x\u\j\v\f\s\q\h\x\5\n\7\i\a\1\q\u\w\7\f\x\j\s\d\8\7\c\q\s\w\m\c\f\g\n\x\z\x\6\6\e\9\r\n\a\0\w\x\4\y\l\t\s\i\r\0\1\1\m\5\5\6\y\a\n\b\d\2\k\t\y\h\p\e\o\s\y\o\2\3\p\m\z\n\c\l\7\t\7\e\5\0\e\n\w\6\h\z\w\q\l\0\n\e\a\v\3\y\e\6\b\t\l\z\8\a\w\u\8\h\f\k\7\3\a\d\f\t\v\t\n\x\x\u\1\4\9\b\i\l\h\q\o\a\b\u\c\m\d\l\8\o\y\t\b\f\h\p\z\d\b\s\l\2\y\y\n\0\5\z\z\o\q\g\k\t\4\y\l\6\w\o\k\n\b\q\i\n\a\4\l\j\a\i\j\6\t\p\w\z\t\m\w\8\w\r\i\u\d\l\t\t\a\3\k\j\4\p\b\0\h\9\8\s\e\l\u\u\6\t\g\s\e\d\u\a\l\s\0\0\p\r\j\6\a\u\p\m\g\n\r\f\r\4\0\v\d\e\a\s\i\m\s\p\2\6\c\0\e\4\z\2\5\1\4\f\e\q\h\4\3\u\j\t\z\z\w\f\c\a\n\3\s\6\p\2\p\s\g\c\1\1\r\x\a\y\h\u\s\r\s\r\s\w\h\t\h\a\a\d\z\x\1\7\e\z\l\1\j\t\e\d\1\e\i\i\h\q\x\k\o\6\k\t\p\v\1\4\8\x\n\r\h\8\6\6\a\d\k\6\9\t\i\4\r\k\4\g\u\z\w\a\n\9\j\z\6\g\m\9\5\2\u\5\u\2\c\y\n\c\9\j\0\x\1\p\x\7\u\e\o\n\1\q\4\z\x\t\p\7\7\8\g\y\y\b\z\v\q\9\v\2\o ]] 00:06:41.125 10:24:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:41.692 10:24:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:06:41.692 10:24:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:06:41.692 10:24:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:41.692 10:24:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:41.692 { 00:06:41.692 "subsystems": [ 00:06:41.692 { 00:06:41.692 "subsystem": "bdev", 00:06:41.692 "config": [ 00:06:41.692 { 00:06:41.692 "params": { 00:06:41.692 "block_size": 512, 00:06:41.692 "num_blocks": 1048576, 00:06:41.692 "name": "malloc0" 00:06:41.692 }, 00:06:41.692 "method": "bdev_malloc_create" 00:06:41.692 }, 00:06:41.692 { 00:06:41.692 "params": { 00:06:41.692 "filename": "/dev/zram1", 00:06:41.692 "name": "uring0" 00:06:41.692 }, 00:06:41.692 "method": "bdev_uring_create" 00:06:41.692 }, 00:06:41.692 { 00:06:41.692 "method": "bdev_wait_for_examine" 00:06:41.692 } 00:06:41.692 ] 00:06:41.692 } 00:06:41.692 ] 00:06:41.692 } 00:06:41.692 [2024-11-15 10:24:42.337474] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:41.692 [2024-11-15 10:24:42.337767] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61198 ] 00:06:41.692 [2024-11-15 10:24:42.484572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.692 [2024-11-15 10:24:42.525318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.951 [2024-11-15 10:24:42.578534] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.328  [2024-11-15T10:24:45.115Z] Copying: 168/512 [MB] (168 MBps) [2024-11-15T10:24:46.047Z] Copying: 335/512 [MB] (167 MBps) [2024-11-15T10:24:46.047Z] Copying: 501/512 [MB] (165 MBps) [2024-11-15T10:24:46.306Z] Copying: 512/512 [MB] (average 167 MBps) 00:06:45.453 00:06:45.453 10:24:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:06:45.453 10:24:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:06:45.453 10:24:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:06:45.453 10:24:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:45.453 10:24:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:45.453 10:24:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:06:45.453 10:24:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:45.453 10:24:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:45.453 [2024-11-15 10:24:46.291781] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:45.453 [2024-11-15 10:24:46.292934] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61248 ] 00:06:45.712 { 00:06:45.712 "subsystems": [ 00:06:45.712 { 00:06:45.712 "subsystem": "bdev", 00:06:45.712 "config": [ 00:06:45.712 { 00:06:45.712 "params": { 00:06:45.712 "block_size": 512, 00:06:45.712 "num_blocks": 1048576, 00:06:45.712 "name": "malloc0" 00:06:45.712 }, 00:06:45.712 "method": "bdev_malloc_create" 00:06:45.712 }, 00:06:45.712 { 00:06:45.712 "params": { 00:06:45.712 "filename": "/dev/zram1", 00:06:45.712 "name": "uring0" 00:06:45.712 }, 00:06:45.712 "method": "bdev_uring_create" 00:06:45.712 }, 00:06:45.712 { 00:06:45.712 "params": { 00:06:45.712 "name": "uring0" 00:06:45.712 }, 00:06:45.712 "method": "bdev_uring_delete" 00:06:45.712 }, 00:06:45.712 { 00:06:45.712 "method": "bdev_wait_for_examine" 00:06:45.712 } 00:06:45.712 ] 00:06:45.712 } 00:06:45.712 ] 00:06:45.712 } 00:06:45.712 [2024-11-15 10:24:46.444504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.712 [2024-11-15 10:24:46.501232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.712 [2024-11-15 10:24:46.553160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.971  [2024-11-15T10:24:47.391Z] Copying: 0/0 [B] (average 0 Bps) 00:06:46.538 00:06:46.538 10:24:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:46.538 10:24:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:06:46.538 10:24:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:46.538 10:24:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.538 10:24:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:06:46.538 10:24:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:06:46.538 10:24:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:46.538 10:24:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:46.538 10:24:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.538 10:24:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.538 10:24:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.538 10:24:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.538 10:24:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.538 10:24:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.538 10:24:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:46.538 10:24:47 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:46.538 [2024-11-15 10:24:47.203960] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:06:46.538 [2024-11-15 10:24:47.204086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61284 ] 00:06:46.538 { 00:06:46.538 "subsystems": [ 00:06:46.538 { 00:06:46.538 "subsystem": "bdev", 00:06:46.538 "config": [ 00:06:46.538 { 00:06:46.538 "params": { 00:06:46.538 "block_size": 512, 00:06:46.538 "num_blocks": 1048576, 00:06:46.538 "name": "malloc0" 00:06:46.538 }, 00:06:46.538 "method": "bdev_malloc_create" 00:06:46.538 }, 00:06:46.538 { 00:06:46.538 "params": { 00:06:46.538 "filename": "/dev/zram1", 00:06:46.539 "name": "uring0" 00:06:46.539 }, 00:06:46.539 "method": "bdev_uring_create" 00:06:46.539 }, 00:06:46.539 { 00:06:46.539 "params": { 00:06:46.539 "name": "uring0" 00:06:46.539 }, 00:06:46.539 "method": "bdev_uring_delete" 00:06:46.539 }, 00:06:46.539 { 00:06:46.539 "method": "bdev_wait_for_examine" 00:06:46.539 } 00:06:46.539 ] 00:06:46.539 } 00:06:46.539 ] 00:06:46.539 } 00:06:46.539 [2024-11-15 10:24:47.349078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.797 [2024-11-15 10:24:47.396666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.797 [2024-11-15 10:24:47.446705] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.797 [2024-11-15 10:24:47.642782] bdev.c:8619:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:06:46.797 [2024-11-15 10:24:47.642829] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:06:46.797 [2024-11-15 10:24:47.642840] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:06:46.797 [2024-11-15 10:24:47.642849] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:47.364 [2024-11-15 10:24:47.949583] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:47.364 10:24:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:06:47.364 10:24:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:47.364 10:24:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:06:47.364 10:24:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:06:47.364 10:24:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:06:47.364 10:24:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:47.364 10:24:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:06:47.364 10:24:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:06:47.364 10:24:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:06:47.364 10:24:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:06:47.364 10:24:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:06:47.364 10:24:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:47.622 00:06:47.622 real 0m14.248s 00:06:47.622 user 0m9.555s 00:06:47.622 sys 0m12.113s 00:06:47.622 10:24:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:47.622 10:24:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:47.622 ************************************ 00:06:47.622 END TEST dd_uring_copy 00:06:47.622 ************************************ 00:06:47.622 ************************************ 00:06:47.622 END TEST spdk_dd_uring 00:06:47.622 ************************************ 00:06:47.622 00:06:47.622 real 0m14.474s 00:06:47.622 user 0m9.678s 00:06:47.622 sys 0m12.217s 00:06:47.622 10:24:48 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:47.622 10:24:48 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:47.622 10:24:48 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:47.622 10:24:48 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:47.622 10:24:48 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:47.622 10:24:48 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:47.622 ************************************ 00:06:47.622 START TEST spdk_dd_sparse 00:06:47.622 ************************************ 00:06:47.622 10:24:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:47.622 * Looking for test storage... 00:06:47.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:47.622 10:24:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:47.622 10:24:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lcov --version 00:06:47.622 10:24:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:47.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.880 --rc genhtml_branch_coverage=1 00:06:47.880 --rc genhtml_function_coverage=1 00:06:47.880 --rc genhtml_legend=1 00:06:47.880 --rc geninfo_all_blocks=1 00:06:47.880 --rc geninfo_unexecuted_blocks=1 00:06:47.880 00:06:47.880 ' 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:47.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.880 --rc genhtml_branch_coverage=1 00:06:47.880 --rc genhtml_function_coverage=1 00:06:47.880 --rc genhtml_legend=1 00:06:47.880 --rc geninfo_all_blocks=1 00:06:47.880 --rc geninfo_unexecuted_blocks=1 00:06:47.880 00:06:47.880 ' 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:47.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.880 --rc genhtml_branch_coverage=1 00:06:47.880 --rc genhtml_function_coverage=1 00:06:47.880 --rc genhtml_legend=1 00:06:47.880 --rc geninfo_all_blocks=1 00:06:47.880 --rc geninfo_unexecuted_blocks=1 00:06:47.880 00:06:47.880 ' 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:47.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.880 --rc genhtml_branch_coverage=1 00:06:47.880 --rc genhtml_function_coverage=1 00:06:47.880 --rc genhtml_legend=1 00:06:47.880 --rc geninfo_all_blocks=1 00:06:47.880 --rc geninfo_unexecuted_blocks=1 00:06:47.880 00:06:47.880 ' 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.880 10:24:48 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.880 10:24:48 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:06:47.881 1+0 records in 00:06:47.881 1+0 records out 00:06:47.881 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00728125 s, 576 MB/s 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:06:47.881 1+0 records in 00:06:47.881 1+0 records out 00:06:47.881 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0078579 s, 534 MB/s 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:06:47.881 1+0 records in 00:06:47.881 1+0 records out 00:06:47.881 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00553777 s, 757 MB/s 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:47.881 ************************************ 00:06:47.881 START TEST dd_sparse_file_to_file 00:06:47.881 ************************************ 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1127 -- # file_to_file 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:47.881 10:24:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:47.881 [2024-11-15 10:24:48.669260] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
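The prepare step traced above lays out the sparse source: the AIO backing file is truncated to 100 MiB, and file_zero1 receives three 4 MiB data extents at offsets 0, 16 MiB and 32 MiB (bs=4M with seek=4 and seek=8), leaving holes between them. A minimal standalone reproduction of that layout with plain coreutils is sketched below; the file names are taken from the trace and nothing SPDK-specific is assumed.

# Backing file for the AIO bdev: 100 MiB, entirely a hole until written.
truncate --size 104857600 dd_sparse_aio_disk
# Sparse source file: 4 MiB of data at offsets 0, 16 MiB and 32 MiB, holes elsewhere.
dd if=/dev/zero of=file_zero1 bs=4M count=1
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8
# Apparent size should be 37748736 bytes while far fewer blocks are allocated
# (24576 512-byte blocks in the trace above).
stat --printf='size=%s blocks=%b\n' file_zero1

The stat size/blocks pair is exactly what the file_to_file test compares after the copy: equal apparent sizes and equal allocated block counts mean the holes survived the round trip.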
00:06:47.881 [2024-11-15 10:24:48.669358] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61384 ] 00:06:47.881 { 00:06:47.881 "subsystems": [ 00:06:47.881 { 00:06:47.881 "subsystem": "bdev", 00:06:47.881 "config": [ 00:06:47.881 { 00:06:47.881 "params": { 00:06:47.881 "block_size": 4096, 00:06:47.881 "filename": "dd_sparse_aio_disk", 00:06:47.881 "name": "dd_aio" 00:06:47.881 }, 00:06:47.881 "method": "bdev_aio_create" 00:06:47.881 }, 00:06:47.881 { 00:06:47.881 "params": { 00:06:47.881 "lvs_name": "dd_lvstore", 00:06:47.881 "bdev_name": "dd_aio" 00:06:47.881 }, 00:06:47.881 "method": "bdev_lvol_create_lvstore" 00:06:47.881 }, 00:06:47.881 { 00:06:47.881 "method": "bdev_wait_for_examine" 00:06:47.881 } 00:06:47.881 ] 00:06:47.881 } 00:06:47.881 ] 00:06:47.881 } 00:06:48.140 [2024-11-15 10:24:48.816149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.140 [2024-11-15 10:24:48.866392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.140 [2024-11-15 10:24:48.918844] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.398  [2024-11-15T10:24:49.251Z] Copying: 12/36 [MB] (average 923 MBps) 00:06:48.398 00:06:48.398 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:06:48.398 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:06:48.398 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:06:48.398 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:06:48.398 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:48.398 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:06:48.657 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:06:48.657 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:06:48.657 ************************************ 00:06:48.657 END TEST dd_sparse_file_to_file 00:06:48.657 ************************************ 00:06:48.657 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:06:48.657 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:48.657 00:06:48.657 real 0m0.644s 00:06:48.657 user 0m0.394s 00:06:48.657 sys 0m0.338s 00:06:48.657 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:48.657 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:48.657 10:24:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:06:48.657 10:24:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:48.657 10:24:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:48.657 10:24:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:48.657 ************************************ 00:06:48.657 START TEST dd_sparse_file_to_bdev 
00:06:48.657 ************************************ 00:06:48.657 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1127 -- # file_to_bdev 00:06:48.657 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:48.657 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:06:48.658 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:06:48.658 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:06:48.658 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:06:48.658 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:06:48.658 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:48.658 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:48.658 [2024-11-15 10:24:49.355936] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:06:48.658 [2024-11-15 10:24:49.356418] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61430 ] 00:06:48.658 { 00:06:48.658 "subsystems": [ 00:06:48.658 { 00:06:48.658 "subsystem": "bdev", 00:06:48.658 "config": [ 00:06:48.658 { 00:06:48.658 "params": { 00:06:48.658 "block_size": 4096, 00:06:48.658 "filename": "dd_sparse_aio_disk", 00:06:48.658 "name": "dd_aio" 00:06:48.658 }, 00:06:48.658 "method": "bdev_aio_create" 00:06:48.658 }, 00:06:48.658 { 00:06:48.658 "params": { 00:06:48.658 "lvs_name": "dd_lvstore", 00:06:48.658 "lvol_name": "dd_lvol", 00:06:48.658 "size_in_mib": 36, 00:06:48.658 "thin_provision": true 00:06:48.658 }, 00:06:48.658 "method": "bdev_lvol_create" 00:06:48.658 }, 00:06:48.658 { 00:06:48.658 "method": "bdev_wait_for_examine" 00:06:48.658 } 00:06:48.658 ] 00:06:48.658 } 00:06:48.658 ] 00:06:48.658 } 00:06:48.658 [2024-11-15 10:24:49.504672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.917 [2024-11-15 10:24:49.556455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.917 [2024-11-15 10:24:49.613812] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.917  [2024-11-15T10:24:50.028Z] Copying: 12/36 [MB] (average 480 MBps) 00:06:49.175 00:06:49.175 00:06:49.175 real 0m0.626s 00:06:49.175 user 0m0.378s 00:06:49.175 sys 0m0.355s 00:06:49.176 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:49.176 ************************************ 00:06:49.176 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:49.176 END TEST dd_sparse_file_to_bdev 00:06:49.176 ************************************ 00:06:49.176 10:24:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:06:49.176 10:24:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:49.176 10:24:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:49.176 10:24:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:49.176 ************************************ 00:06:49.176 START TEST dd_sparse_bdev_to_file 00:06:49.176 ************************************ 00:06:49.176 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1127 -- # bdev_to_file 00:06:49.176 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:06:49.176 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:06:49.176 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:49.176 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:06:49.176 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:06:49.176 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:06:49.176 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:49.176 10:24:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:49.435 { 00:06:49.435 "subsystems": [ 00:06:49.435 { 00:06:49.435 "subsystem": "bdev", 00:06:49.435 "config": [ 00:06:49.435 { 00:06:49.435 "params": { 00:06:49.435 "block_size": 4096, 00:06:49.435 "filename": "dd_sparse_aio_disk", 00:06:49.435 "name": "dd_aio" 00:06:49.435 }, 00:06:49.435 "method": "bdev_aio_create" 00:06:49.435 }, 00:06:49.435 { 00:06:49.435 "method": "bdev_wait_for_examine" 00:06:49.435 } 00:06:49.435 ] 00:06:49.435 } 00:06:49.435 ] 00:06:49.435 } 00:06:49.435 [2024-11-15 10:24:50.047534] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
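The invocation being set up here reads the logical volume back into a regular file: --ib names the input bdev (dd_lvstore/dd_lvol), --of the output file, and --sparse enables hole skipping in the input target so file_zero3 ends up with the same allocated size as the original. A standalone sketch of the same call follows; it assumes the build-tree spdk_dd binary, that the dd_lvstore/dd_lvol volume created earlier in the run still exists on the AIO bdev, and it replaces the /dev/fd/62 config of the harness with an ordinary JSON file for readability (the bdev configuration itself is copied from the trace).

cat > dd_aio.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_aio_create",
          "params": { "filename": "dd_sparse_aio_disk", "name": "dd_aio", "block_size": 4096 }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# Read the lvol back out; --bs=12582912 (12 MiB) matches the copy unit used by the test.
./build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json dd_aio.json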
00:06:49.435 [2024-11-15 10:24:50.047634] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61468 ] 00:06:49.435 [2024-11-15 10:24:50.193456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.435 [2024-11-15 10:24:50.249111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.693 [2024-11-15 10:24:50.302506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.693  [2024-11-15T10:24:50.809Z] Copying: 12/36 [MB] (average 1000 MBps) 00:06:49.956 00:06:49.956 10:24:50 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:06:49.956 10:24:50 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:06:49.956 10:24:50 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:06:49.956 10:24:50 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:06:49.956 10:24:50 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:49.956 10:24:50 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:06:49.956 10:24:50 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:06:49.956 10:24:50 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:06:49.956 10:24:50 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:06:49.956 10:24:50 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:49.956 00:06:49.956 real 0m0.651s 00:06:49.956 user 0m0.385s 00:06:49.957 sys 0m0.369s 00:06:49.957 10:24:50 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:49.957 10:24:50 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:49.957 ************************************ 00:06:49.957 END TEST dd_sparse_bdev_to_file 00:06:49.957 ************************************ 00:06:49.957 10:24:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:06:49.957 10:24:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:06:49.957 10:24:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:06:49.957 10:24:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:06:49.957 10:24:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:06:49.957 00:06:49.957 real 0m2.332s 00:06:49.957 user 0m1.319s 00:06:49.957 sys 0m1.300s 00:06:49.957 10:24:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:49.957 10:24:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:49.957 ************************************ 00:06:49.957 END TEST spdk_dd_sparse 00:06:49.957 ************************************ 00:06:49.957 10:24:50 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:49.957 10:24:50 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:49.957 10:24:50 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:49.957 10:24:50 spdk_dd -- 
common/autotest_common.sh@10 -- # set +x 00:06:49.957 ************************************ 00:06:49.957 START TEST spdk_dd_negative 00:06:49.957 ************************************ 00:06:49.957 10:24:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:50.227 * Looking for test storage... 00:06:50.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lcov --version 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:50.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.227 --rc genhtml_branch_coverage=1 00:06:50.227 --rc genhtml_function_coverage=1 00:06:50.227 --rc genhtml_legend=1 00:06:50.227 --rc geninfo_all_blocks=1 00:06:50.227 --rc geninfo_unexecuted_blocks=1 00:06:50.227 00:06:50.227 ' 00:06:50.227 10:24:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:50.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.227 --rc genhtml_branch_coverage=1 00:06:50.227 --rc genhtml_function_coverage=1 00:06:50.227 --rc genhtml_legend=1 00:06:50.227 --rc geninfo_all_blocks=1 00:06:50.227 --rc geninfo_unexecuted_blocks=1 00:06:50.227 00:06:50.227 ' 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:50.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.228 --rc genhtml_branch_coverage=1 00:06:50.228 --rc genhtml_function_coverage=1 00:06:50.228 --rc genhtml_legend=1 00:06:50.228 --rc geninfo_all_blocks=1 00:06:50.228 --rc geninfo_unexecuted_blocks=1 00:06:50.228 00:06:50.228 ' 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:50.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.228 --rc genhtml_branch_coverage=1 00:06:50.228 --rc genhtml_function_coverage=1 00:06:50.228 --rc genhtml_legend=1 00:06:50.228 --rc geninfo_all_blocks=1 00:06:50.228 --rc geninfo_unexecuted_blocks=1 00:06:50.228 00:06:50.228 ' 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:50.228 ************************************ 00:06:50.228 START TEST 
dd_invalid_arguments 00:06:50.228 ************************************ 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1127 -- # invalid_arguments 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:50.228 10:24:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:50.228 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:06:50.228 00:06:50.228 CPU options: 00:06:50.228 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:06:50.228 (like [0,1,10]) 00:06:50.228 --lcores lcore to CPU mapping list. The list is in the format: 00:06:50.228 [<,lcores[@CPUs]>...] 00:06:50.228 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:50.228 Within the group, '-' is used for range separator, 00:06:50.228 ',' is used for single number separator. 00:06:50.228 '( )' can be omitted for single element group, 00:06:50.228 '@' can be omitted if cpus and lcores have the same value 00:06:50.228 --disable-cpumask-locks Disable CPU core lock files. 00:06:50.228 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:06:50.228 pollers in the app support interrupt mode) 00:06:50.228 -p, --main-core main (primary) core for DPDK 00:06:50.228 00:06:50.228 Configuration options: 00:06:50.228 -c, --config, --json JSON config file 00:06:50.228 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:50.228 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:06:50.228 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:50.228 --rpcs-allowed comma-separated list of permitted RPCS 00:06:50.228 --json-ignore-init-errors don't exit on invalid config entry 00:06:50.228 00:06:50.228 Memory options: 00:06:50.228 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:50.228 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:50.228 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:50.228 -R, --huge-unlink unlink huge files after initialization 00:06:50.228 -n, --mem-channels number of memory channels used for DPDK 00:06:50.228 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:50.228 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:50.228 --no-huge run without using hugepages 00:06:50.228 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:06:50.228 -i, --shm-id shared memory ID (optional) 00:06:50.228 -g, --single-file-segments force creating just one hugetlbfs file 00:06:50.228 00:06:50.228 PCI options: 00:06:50.228 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:50.228 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:50.228 -u, --no-pci disable PCI access 00:06:50.228 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:50.228 00:06:50.228 Log options: 00:06:50.228 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:06:50.228 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:06:50.228 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:06:50.228 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:06:50.228 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:06:50.228 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:06:50.228 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:06:50.228 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:06:50.228 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:06:50.228 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:06:50.228 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:06:50.228 --silence-noticelog disable notice level logging to stderr 00:06:50.228 00:06:50.228 Trace options: 00:06:50.228 --num-trace-entries number of trace entries for each core, must be power of 2, 00:06:50.228 setting 0 to disable trace (default 32768) 00:06:50.228 Tracepoints vary in size and can use more than one trace entry. 00:06:50.228 -e, --tpoint-group [:] 00:06:50.228 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:06:50.228 [2024-11-15 10:24:51.015130] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:06:50.228 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:06:50.228 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:06:50.228 bdev_raid, scheduler, all). 00:06:50.228 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:06:50.228 a tracepoint group. First tpoint inside a group can be enabled by 00:06:50.228 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:06:50.228 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:06:50.229 in /include/spdk_internal/trace_defs.h 00:06:50.229 00:06:50.229 Other options: 00:06:50.229 -h, --help show this usage 00:06:50.229 -v, --version print SPDK version 00:06:50.229 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:50.229 --env-context Opaque context for use of the env implementation 00:06:50.229 00:06:50.229 Application specific: 00:06:50.229 [--------- DD Options ---------] 00:06:50.229 --if Input file. Must specify either --if or --ib. 00:06:50.229 --ib Input bdev. Must specifier either --if or --ib 00:06:50.229 --of Output file. Must specify either --of or --ob. 00:06:50.229 --ob Output bdev. Must specify either --of or --ob. 00:06:50.229 --iflag Input file flags. 00:06:50.229 --oflag Output file flags. 00:06:50.229 --bs I/O unit size (default: 4096) 00:06:50.229 --qd Queue depth (default: 2) 00:06:50.229 --count I/O unit count. The number of I/O units to copy. (default: all) 00:06:50.229 --skip Skip this many I/O units at start of input. (default: 0) 00:06:50.229 --seek Skip this many I/O units at start of output. (default: 0) 00:06:50.229 --aio Force usage of AIO. (by default io_uring is used if available) 00:06:50.229 --sparse Enable hole skipping in input target 00:06:50.229 Available iflag and oflag values: 00:06:50.229 append - append mode 00:06:50.229 direct - use direct I/O for data 00:06:50.229 directory - fail unless a directory 00:06:50.229 dsync - use synchronized I/O for data 00:06:50.229 noatime - do not update access time 00:06:50.229 noctty - do not assign controlling terminal from file 00:06:50.229 nofollow - do not follow symlinks 00:06:50.229 nonblock - use non-blocking I/O 00:06:50.229 sync - use synchronized I/O for data and metadata 00:06:50.229 10:24:51 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:06:50.229 10:24:51 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.229 10:24:51 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:50.229 10:24:51 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.229 00:06:50.229 real 0m0.072s 00:06:50.229 user 0m0.045s 00:06:50.229 sys 0m0.025s 00:06:50.229 10:24:51 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:50.229 ************************************ 00:06:50.229 END TEST dd_invalid_arguments 00:06:50.229 ************************************ 00:06:50.229 10:24:51 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:06:50.229 10:24:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:06:50.229 10:24:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:50.229 10:24:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:50.229 10:24:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:50.488 ************************************ 00:06:50.488 START TEST dd_double_input 00:06:50.488 ************************************ 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1127 -- # double_input 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:50.488 [2024-11-15 10:24:51.143611] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
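Every case in this negative suite follows the same shape: spdk_dd is run through the NOT helper, which only succeeds when the command exits non-zero, and the specific *ERROR* line (here, that --if and --ib are mutually exclusive) confirms the failure happened in argument validation rather than during I/O. A plain-shell equivalent of this particular check, assuming the build-tree spdk_dd binary and shortened dump-file names, would look roughly like:

# spdk_dd must refuse a command line that names both a file input and a bdev input.
if ./build/bin/spdk_dd --if=dd.dump0 --ib= --ob= >spdk_dd.log 2>&1; then
    echo "unexpected success" >&2
    exit 1
fi
# Expected validation error, as emitted by spdk_dd.c in the trace above.
grep -q 'You may specify either --if or --ib, but not both' spdk_dd.log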
00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.488 00:06:50.488 real 0m0.075s 00:06:50.488 user 0m0.049s 00:06:50.488 sys 0m0.025s 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:50.488 ************************************ 00:06:50.488 END TEST dd_double_input 00:06:50.488 ************************************ 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:50.488 ************************************ 00:06:50.488 START TEST dd_double_output 00:06:50.488 ************************************ 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1127 -- # double_output 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:50.488 [2024-11-15 10:24:51.274230] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.488 00:06:50.488 real 0m0.079s 00:06:50.488 user 0m0.053s 00:06:50.488 sys 0m0.025s 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:06:50.488 ************************************ 00:06:50.488 END TEST dd_double_output 00:06:50.488 ************************************ 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:50.488 10:24:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:50.747 ************************************ 00:06:50.747 START TEST dd_no_input 00:06:50.747 ************************************ 00:06:50.747 10:24:51 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1127 -- # no_input 00:06:50.747 10:24:51 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:50.747 10:24:51 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:06:50.747 10:24:51 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:50.747 10:24:51 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:50.748 [2024-11-15 10:24:51.412299] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.748 00:06:50.748 real 0m0.083s 00:06:50.748 user 0m0.052s 00:06:50.748 sys 0m0.029s 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:06:50.748 ************************************ 00:06:50.748 END TEST dd_no_input 00:06:50.748 ************************************ 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:50.748 ************************************ 00:06:50.748 START TEST dd_no_output 00:06:50.748 ************************************ 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1127 -- # no_output 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:50.748 [2024-11-15 10:24:51.535106] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:06:50.748 10:24:51 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.748 00:06:50.748 real 0m0.071s 00:06:50.748 user 0m0.044s 00:06:50.748 sys 0m0.026s 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:06:50.748 ************************************ 00:06:50.748 END TEST dd_no_output 00:06:50.748 ************************************ 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:50.748 10:24:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:51.007 ************************************ 00:06:51.007 START TEST dd_wrong_blocksize 00:06:51.007 ************************************ 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1127 -- # wrong_blocksize 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:51.007 [2024-11-15 10:24:51.661926] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:51.007 00:06:51.007 real 0m0.078s 00:06:51.007 user 0m0.046s 00:06:51.007 sys 0m0.031s 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:51.007 ************************************ 00:06:51.007 END TEST dd_wrong_blocksize 00:06:51.007 ************************************ 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:51.007 ************************************ 00:06:51.007 START TEST dd_smaller_blocksize 00:06:51.007 ************************************ 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1127 -- # smaller_blocksize 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.007 
10:24:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:51.007 10:24:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:51.007 [2024-11-15 10:24:51.790883] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:06:51.007 [2024-11-15 10:24:51.790983] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61695 ] 00:06:51.266 [2024-11-15 10:24:51.941471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.266 [2024-11-15 10:24:51.991774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.266 [2024-11-15 10:24:52.047748] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.525 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:51.784 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:51.784 [2024-11-15 10:24:52.624723] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:06:51.784 [2024-11-15 10:24:52.624807] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.044 [2024-11-15 10:24:52.746120] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:52.044 10:24:52 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:06:52.044 10:24:52 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.044 10:24:52 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:06:52.044 10:24:52 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:06:52.044 10:24:52 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:06:52.044 10:24:52 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.044 00:06:52.044 real 0m1.078s 00:06:52.044 user 0m0.399s 00:06:52.044 sys 0m0.572s 00:06:52.044 10:24:52 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:52.044 ************************************ 00:06:52.044 END TEST dd_smaller_blocksize 00:06:52.044 10:24:52 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:52.044 ************************************ 00:06:52.044 10:24:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:06:52.044 10:24:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:52.044 10:24:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:52.044 10:24:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:52.044 ************************************ 00:06:52.044 START TEST dd_invalid_count 00:06:52.044 ************************************ 00:06:52.044 10:24:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1127 -- # invalid_count 
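The remaining cases probe the option parser the same way: a negative --count and an --oflag given without --of must each be rejected with a specific *ERROR* before any copy is attempted. A compact sketch of those two probes, reusing the assumptions above (build-tree binary, shortened dump paths), is:

# Each invalid combination has to fail; the expected error strings appear in the trace below.
for args in '--if=dd.dump0 --of=dd.dump1 --count=-9' \
            '--ib= --ob= --oflag=0'; do
    # Word splitting of $args is intentional: each entry is a full option list.
    if ./build/bin/spdk_dd $args >spdk_dd.log 2>&1; then
        echo "unexpected success: spdk_dd $args" >&2
        exit 1
    fi
done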
00:06:52.044 10:24:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:52.044 10:24:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:06:52.044 10:24:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:52.044 10:24:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.044 10:24:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.044 10:24:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.044 10:24:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.044 10:24:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.044 10:24:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.044 10:24:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.044 10:24:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.044 10:24:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:52.303 [2024-11-15 10:24:52.932668] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:06:52.303 10:24:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:06:52.303 10:24:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.303 10:24:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:52.303 10:24:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.303 00:06:52.303 real 0m0.088s 00:06:52.303 user 0m0.049s 00:06:52.303 sys 0m0.037s 00:06:52.303 10:24:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:52.303 ************************************ 00:06:52.303 END TEST dd_invalid_count 00:06:52.303 ************************************ 00:06:52.303 10:24:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:06:52.303 10:24:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:06:52.303 10:24:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:52.303 10:24:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:52.303 10:24:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:52.303 ************************************ 
00:06:52.303 START TEST dd_invalid_oflag 00:06:52.303 ************************************ 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1127 -- # invalid_oflag 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:52.303 [2024-11-15 10:24:53.055780] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.303 00:06:52.303 real 0m0.065s 00:06:52.303 user 0m0.042s 00:06:52.303 sys 0m0.022s 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:06:52.303 ************************************ 00:06:52.303 END TEST dd_invalid_oflag 00:06:52.303 ************************************ 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:52.303 ************************************ 00:06:52.303 START TEST dd_invalid_iflag 00:06:52.303 
************************************ 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1127 -- # invalid_iflag 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.303 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:52.562 [2024-11-15 10:24:53.180013] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:06:52.562 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:06:52.562 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.562 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:52.562 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.562 00:06:52.562 real 0m0.077s 00:06:52.562 user 0m0.047s 00:06:52.562 sys 0m0.030s 00:06:52.562 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:52.562 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:06:52.562 ************************************ 00:06:52.562 END TEST dd_invalid_iflag 00:06:52.562 ************************************ 00:06:52.562 10:24:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:06:52.562 10:24:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:52.562 10:24:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:52.562 10:24:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:52.562 ************************************ 00:06:52.562 START TEST dd_unknown_flag 00:06:52.562 ************************************ 00:06:52.562 
10:24:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1127 -- # unknown_flag 00:06:52.562 10:24:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:52.562 10:24:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:06:52.563 10:24:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:52.563 10:24:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.563 10:24:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.563 10:24:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.563 10:24:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.563 10:24:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.563 10:24:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.563 10:24:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.563 10:24:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.563 10:24:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:52.563 [2024-11-15 10:24:53.308866] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:52.563 [2024-11-15 10:24:53.308976] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61792 ] 00:06:52.821 [2024-11-15 10:24:53.457310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.821 [2024-11-15 10:24:53.518209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.821 [2024-11-15 10:24:53.571498] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.821 [2024-11-15 10:24:53.607808] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:06:52.821 [2024-11-15 10:24:53.607892] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.821 [2024-11-15 10:24:53.607946] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:06:52.821 [2024-11-15 10:24:53.607961] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.821 [2024-11-15 10:24:53.608193] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:06:52.821 [2024-11-15 10:24:53.608225] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.821 [2024-11-15 10:24:53.608273] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:52.821 [2024-11-15 10:24:53.608283] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:53.080 [2024-11-15 10:24:53.724242] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:53.080 00:06:53.080 real 0m0.535s 00:06:53.080 user 0m0.283s 00:06:53.080 sys 0m0.158s 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:06:53.080 ************************************ 00:06:53.080 END TEST dd_unknown_flag 00:06:53.080 ************************************ 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:53.080 ************************************ 00:06:53.080 START TEST dd_invalid_json 00:06:53.080 ************************************ 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1127 -- # invalid_json 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:53.080 10:24:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:53.080 [2024-11-15 10:24:53.890839] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:53.080 [2024-11-15 10:24:53.890931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61821 ] 00:06:53.339 [2024-11-15 10:24:54.037601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.339 [2024-11-15 10:24:54.078117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.339 [2024-11-15 10:24:54.078183] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:06:53.339 [2024-11-15 10:24:54.078198] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:53.339 [2024-11-15 10:24:54.078206] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:53.339 [2024-11-15 10:24:54.078239] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:53.339 00:06:53.339 real 0m0.297s 00:06:53.339 user 0m0.141s 00:06:53.339 sys 0m0.055s 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:06:53.339 ************************************ 00:06:53.339 END TEST dd_invalid_json 00:06:53.339 ************************************ 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:53.339 ************************************ 00:06:53.339 START TEST dd_invalid_seek 00:06:53.339 ************************************ 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1127 -- # invalid_seek 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:53.339 
10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.339 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.340 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.340 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.340 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.340 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:53.340 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:53.599 { 00:06:53.599 "subsystems": [ 00:06:53.599 { 00:06:53.599 "subsystem": "bdev", 00:06:53.599 "config": [ 00:06:53.599 { 00:06:53.599 "params": { 00:06:53.599 "block_size": 512, 00:06:53.599 "num_blocks": 512, 00:06:53.599 "name": "malloc0" 00:06:53.599 }, 00:06:53.599 "method": "bdev_malloc_create" 00:06:53.599 }, 00:06:53.599 { 00:06:53.599 "params": { 00:06:53.599 "block_size": 512, 00:06:53.599 "num_blocks": 512, 00:06:53.599 "name": "malloc1" 00:06:53.599 }, 00:06:53.599 "method": "bdev_malloc_create" 00:06:53.599 }, 00:06:53.599 { 00:06:53.599 "method": "bdev_wait_for_examine" 00:06:53.599 } 00:06:53.599 ] 00:06:53.599 } 00:06:53.599 ] 00:06:53.599 } 00:06:53.599 [2024-11-15 10:24:54.242846] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:53.599 [2024-11-15 10:24:54.242937] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61850 ] 00:06:53.599 [2024-11-15 10:24:54.396488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.858 [2024-11-15 10:24:54.451700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.858 [2024-11-15 10:24:54.510639] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.858 [2024-11-15 10:24:54.579544] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:06:53.858 [2024-11-15 10:24:54.579623] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:53.858 [2024-11-15 10:24:54.704389] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:54.117 00:06:54.117 real 0m0.587s 00:06:54.117 user 0m0.391s 00:06:54.117 sys 0m0.151s 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:54.117 ************************************ 00:06:54.117 END TEST dd_invalid_seek 00:06:54.117 ************************************ 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:54.117 ************************************ 00:06:54.117 START TEST dd_invalid_skip 00:06:54.117 ************************************ 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1127 -- # invalid_skip 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:54.117 10:24:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:54.117 { 00:06:54.117 "subsystems": [ 00:06:54.117 { 00:06:54.117 "subsystem": "bdev", 00:06:54.117 "config": [ 00:06:54.117 { 00:06:54.117 "params": { 00:06:54.117 "block_size": 512, 00:06:54.117 "num_blocks": 512, 00:06:54.117 "name": "malloc0" 00:06:54.117 }, 00:06:54.117 "method": "bdev_malloc_create" 00:06:54.117 }, 00:06:54.117 { 00:06:54.117 "params": { 00:06:54.117 "block_size": 512, 00:06:54.117 "num_blocks": 512, 00:06:54.117 "name": "malloc1" 00:06:54.117 }, 00:06:54.117 "method": "bdev_malloc_create" 00:06:54.117 }, 00:06:54.117 { 00:06:54.117 "method": "bdev_wait_for_examine" 00:06:54.117 } 00:06:54.117 ] 00:06:54.117 } 00:06:54.117 ] 00:06:54.117 } 00:06:54.117 [2024-11-15 10:24:54.884160] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:54.117 [2024-11-15 10:24:54.884258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61884 ] 00:06:54.376 [2024-11-15 10:24:55.028100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.376 [2024-11-15 10:24:55.078133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.376 [2024-11-15 10:24:55.130258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.376 [2024-11-15 10:24:55.196452] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:06:54.376 [2024-11-15 10:24:55.196740] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:54.634 [2024-11-15 10:24:55.319704] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:54.634 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:54.635 ************************************ 00:06:54.635 END TEST dd_invalid_skip 00:06:54.635 ************************************ 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:54.635 00:06:54.635 real 0m0.562s 00:06:54.635 user 0m0.355s 00:06:54.635 sys 0m0.163s 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:54.635 ************************************ 00:06:54.635 START TEST dd_invalid_input_count 00:06:54.635 ************************************ 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1127 -- # invalid_input_count 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:54.635 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:54.894 { 00:06:54.894 "subsystems": [ 00:06:54.894 { 00:06:54.894 "subsystem": "bdev", 00:06:54.894 "config": [ 00:06:54.894 { 00:06:54.894 "params": { 00:06:54.894 "block_size": 512, 00:06:54.894 "num_blocks": 512, 00:06:54.894 "name": "malloc0" 00:06:54.894 }, 00:06:54.894 "method": "bdev_malloc_create" 00:06:54.894 }, 00:06:54.894 { 00:06:54.894 "params": { 00:06:54.894 "block_size": 512, 00:06:54.894 "num_blocks": 512, 00:06:54.894 "name": "malloc1" 00:06:54.894 }, 00:06:54.894 "method": "bdev_malloc_create" 00:06:54.894 }, 00:06:54.894 { 00:06:54.894 "method": "bdev_wait_for_examine" 00:06:54.894 } 00:06:54.894 ] 00:06:54.894 } 00:06:54.894 ] 00:06:54.894 } 00:06:54.894 [2024-11-15 10:24:55.501012] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:54.894 [2024-11-15 10:24:55.501117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61917 ] 00:06:54.894 [2024-11-15 10:24:55.647777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.894 [2024-11-15 10:24:55.702157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.154 [2024-11-15 10:24:55.755729] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.154 [2024-11-15 10:24:55.817256] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:06:55.154 [2024-11-15 10:24:55.817332] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:55.154 [2024-11-15 10:24:55.930579] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:55.154 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:06:55.154 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:55.154 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:06:55.154 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:06:55.154 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:06:55.154 ************************************ 00:06:55.154 END TEST dd_invalid_input_count 00:06:55.154 ************************************ 00:06:55.154 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:55.154 00:06:55.154 real 0m0.557s 00:06:55.154 user 0m0.355s 00:06:55.154 sys 0m0.164s 00:06:55.154 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:55.154 10:24:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:06:55.413 10:24:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:06:55.413 10:24:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:55.413 10:24:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:55.413 10:24:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:55.413 ************************************ 00:06:55.413 START TEST dd_invalid_output_count 00:06:55.413 ************************************ 00:06:55.413 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1127 -- # invalid_output_count 00:06:55.413 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:55.413 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:55.413 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:06:55.414 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:55.414 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:06:55.414 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:55.414 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:06:55.414 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.414 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:06:55.414 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:06:55.414 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.414 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.414 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.414 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.414 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.414 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.414 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:55.414 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:55.414 { 00:06:55.414 "subsystems": [ 00:06:55.414 { 00:06:55.414 "subsystem": "bdev", 00:06:55.414 "config": [ 00:06:55.414 { 00:06:55.414 "params": { 00:06:55.414 "block_size": 512, 00:06:55.414 "num_blocks": 512, 00:06:55.414 "name": "malloc0" 00:06:55.414 }, 00:06:55.414 "method": "bdev_malloc_create" 00:06:55.414 }, 00:06:55.414 { 00:06:55.414 "method": "bdev_wait_for_examine" 00:06:55.414 } 00:06:55.414 ] 00:06:55.414 } 00:06:55.414 ] 00:06:55.414 } 00:06:55.414 [2024-11-15 10:24:56.103806] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:55.414 [2024-11-15 10:24:56.103926] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61951 ] 00:06:55.414 [2024-11-15 10:24:56.250732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.672 [2024-11-15 10:24:56.299344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.672 [2024-11-15 10:24:56.351728] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.672 [2024-11-15 10:24:56.406574] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:06:55.672 [2024-11-15 10:24:56.406675] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:55.933 [2024-11-15 10:24:56.526699] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:55.933 00:06:55.933 real 0m0.551s 00:06:55.933 user 0m0.347s 00:06:55.933 sys 0m0.158s 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:06:55.933 ************************************ 00:06:55.933 END TEST dd_invalid_output_count 00:06:55.933 ************************************ 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:55.933 ************************************ 00:06:55.933 START TEST dd_bs_not_multiple 00:06:55.933 ************************************ 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1127 -- # bs_not_multiple 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:55.933 10:24:56 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:55.933 10:24:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:55.933 [2024-11-15 10:24:56.695541] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:55.933 [2024-11-15 10:24:56.695665] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61988 ] 00:06:55.933 { 00:06:55.933 "subsystems": [ 00:06:55.933 { 00:06:55.933 "subsystem": "bdev", 00:06:55.933 "config": [ 00:06:55.933 { 00:06:55.933 "params": { 00:06:55.933 "block_size": 512, 00:06:55.933 "num_blocks": 512, 00:06:55.933 "name": "malloc0" 00:06:55.933 }, 00:06:55.933 "method": "bdev_malloc_create" 00:06:55.933 }, 00:06:55.933 { 00:06:55.933 "params": { 00:06:55.933 "block_size": 512, 00:06:55.933 "num_blocks": 512, 00:06:55.933 "name": "malloc1" 00:06:55.933 }, 00:06:55.933 "method": "bdev_malloc_create" 00:06:55.933 }, 00:06:55.933 { 00:06:55.933 "method": "bdev_wait_for_examine" 00:06:55.933 } 00:06:55.933 ] 00:06:55.933 } 00:06:55.933 ] 00:06:55.933 } 00:06:56.193 [2024-11-15 10:24:56.839136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.193 [2024-11-15 10:24:56.886654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.193 [2024-11-15 10:24:56.939781] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.193 [2024-11-15 10:24:57.001030] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:06:56.193 [2024-11-15 10:24:57.001120] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:56.451 [2024-11-15 10:24:57.117127] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:56.451 10:24:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:06:56.451 10:24:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:56.451 10:24:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:06:56.451 10:24:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:06:56.451 10:24:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:06:56.451 10:24:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:56.451 ************************************ 00:06:56.451 END TEST dd_bs_not_multiple 00:06:56.451 ************************************ 00:06:56.451 00:06:56.451 real 0m0.538s 00:06:56.451 user 0m0.348s 00:06:56.451 sys 0m0.151s 00:06:56.451 10:24:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:56.451 10:24:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:06:56.451 00:06:56.451 real 0m6.449s 00:06:56.451 user 0m3.430s 00:06:56.451 sys 0m2.416s 00:06:56.451 10:24:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:56.451 ************************************ 00:06:56.451 END TEST spdk_dd_negative 00:06:56.451 ************************************ 00:06:56.451 10:24:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:56.451 ************************************ 00:06:56.451 END TEST spdk_dd 00:06:56.451 ************************************ 00:06:56.451 00:06:56.451 real 1m16.354s 00:06:56.451 user 0m48.393s 00:06:56.451 sys 0m34.105s 00:06:56.451 10:24:57 spdk_dd -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:06:56.451 10:24:57 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:56.451 10:24:57 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:56.451 10:24:57 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:56.451 10:24:57 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:56.451 10:24:57 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:56.451 10:24:57 -- common/autotest_common.sh@10 -- # set +x 00:06:56.710 10:24:57 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:56.710 10:24:57 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:06:56.710 10:24:57 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:06:56.710 10:24:57 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:06:56.710 10:24:57 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:06:56.710 10:24:57 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:06:56.710 10:24:57 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:56.710 10:24:57 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:56.710 10:24:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:56.710 10:24:57 -- common/autotest_common.sh@10 -- # set +x 00:06:56.710 ************************************ 00:06:56.710 START TEST nvmf_tcp 00:06:56.710 ************************************ 00:06:56.710 10:24:57 nvmf_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:56.710 * Looking for test storage... 00:06:56.711 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:56.711 10:24:57 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:56.711 10:24:57 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:56.711 10:24:57 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:56.711 10:24:57 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.711 10:24:57 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:56.711 10:24:57 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.711 10:24:57 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:56.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.711 --rc genhtml_branch_coverage=1 00:06:56.711 --rc genhtml_function_coverage=1 00:06:56.711 --rc genhtml_legend=1 00:06:56.711 --rc geninfo_all_blocks=1 00:06:56.711 --rc geninfo_unexecuted_blocks=1 00:06:56.711 00:06:56.711 ' 00:06:56.711 10:24:57 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:56.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.711 --rc genhtml_branch_coverage=1 00:06:56.711 --rc genhtml_function_coverage=1 00:06:56.711 --rc genhtml_legend=1 00:06:56.711 --rc geninfo_all_blocks=1 00:06:56.711 --rc geninfo_unexecuted_blocks=1 00:06:56.711 00:06:56.711 ' 00:06:56.711 10:24:57 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:56.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.711 --rc genhtml_branch_coverage=1 00:06:56.711 --rc genhtml_function_coverage=1 00:06:56.711 --rc genhtml_legend=1 00:06:56.711 --rc geninfo_all_blocks=1 00:06:56.711 --rc geninfo_unexecuted_blocks=1 00:06:56.711 00:06:56.711 ' 00:06:56.711 10:24:57 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:56.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.711 --rc genhtml_branch_coverage=1 00:06:56.711 --rc genhtml_function_coverage=1 00:06:56.711 --rc genhtml_legend=1 00:06:56.711 --rc geninfo_all_blocks=1 00:06:56.711 --rc geninfo_unexecuted_blocks=1 00:06:56.711 00:06:56.711 ' 00:06:56.711 10:24:57 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:56.711 10:24:57 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:56.711 10:24:57 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:56.711 10:24:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:56.711 10:24:57 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:56.711 10:24:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:56.711 ************************************ 00:06:56.711 START TEST nvmf_target_core 00:06:56.711 ************************************ 00:06:56.711 10:24:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:56.971 * Looking for test storage... 00:06:56.971 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:56.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.971 --rc genhtml_branch_coverage=1 00:06:56.971 --rc genhtml_function_coverage=1 00:06:56.971 --rc genhtml_legend=1 00:06:56.971 --rc geninfo_all_blocks=1 00:06:56.971 --rc geninfo_unexecuted_blocks=1 00:06:56.971 00:06:56.971 ' 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:56.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.971 --rc genhtml_branch_coverage=1 00:06:56.971 --rc genhtml_function_coverage=1 00:06:56.971 --rc genhtml_legend=1 00:06:56.971 --rc geninfo_all_blocks=1 00:06:56.971 --rc geninfo_unexecuted_blocks=1 00:06:56.971 00:06:56.971 ' 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:56.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.971 --rc genhtml_branch_coverage=1 00:06:56.971 --rc genhtml_function_coverage=1 00:06:56.971 --rc genhtml_legend=1 00:06:56.971 --rc geninfo_all_blocks=1 00:06:56.971 --rc geninfo_unexecuted_blocks=1 00:06:56.971 00:06:56.971 ' 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:56.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.971 --rc genhtml_branch_coverage=1 00:06:56.971 --rc genhtml_function_coverage=1 00:06:56.971 --rc genhtml_legend=1 00:06:56.971 --rc geninfo_all_blocks=1 00:06:56.971 --rc geninfo_unexecuted_blocks=1 00:06:56.971 00:06:56.971 ' 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:06:56.971 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:56.972 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:56.972 ************************************ 00:06:56.972 START TEST nvmf_host_management 00:06:56.972 ************************************ 00:06:56.972 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:56.972 * Looking for test storage... 
00:06:57.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:57.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.232 --rc genhtml_branch_coverage=1 00:06:57.232 --rc genhtml_function_coverage=1 00:06:57.232 --rc genhtml_legend=1 00:06:57.232 --rc geninfo_all_blocks=1 00:06:57.232 --rc geninfo_unexecuted_blocks=1 00:06:57.232 00:06:57.232 ' 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:57.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.232 --rc genhtml_branch_coverage=1 00:06:57.232 --rc genhtml_function_coverage=1 00:06:57.232 --rc genhtml_legend=1 00:06:57.232 --rc geninfo_all_blocks=1 00:06:57.232 --rc geninfo_unexecuted_blocks=1 00:06:57.232 00:06:57.232 ' 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:57.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.232 --rc genhtml_branch_coverage=1 00:06:57.232 --rc genhtml_function_coverage=1 00:06:57.232 --rc genhtml_legend=1 00:06:57.232 --rc geninfo_all_blocks=1 00:06:57.232 --rc geninfo_unexecuted_blocks=1 00:06:57.232 00:06:57.232 ' 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:57.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.232 --rc genhtml_branch_coverage=1 00:06:57.232 --rc genhtml_function_coverage=1 00:06:57.232 --rc genhtml_legend=1 00:06:57.232 --rc geninfo_all_blocks=1 00:06:57.232 --rc geninfo_unexecuted_blocks=1 00:06:57.232 00:06:57.232 ' 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.232 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:57.233 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:57.233 10:24:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:57.233 Cannot find device "nvmf_init_br" 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:57.233 Cannot find device "nvmf_init_br2" 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:57.233 Cannot find device "nvmf_tgt_br" 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:06:57.233 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:57.233 Cannot find device "nvmf_tgt_br2" 00:06:57.233 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:06:57.233 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:57.233 Cannot find device "nvmf_init_br" 00:06:57.233 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:06:57.233 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:57.233 Cannot find device "nvmf_init_br2" 00:06:57.233 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:06:57.233 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:57.233 Cannot find device "nvmf_tgt_br" 00:06:57.233 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:06:57.233 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:57.233 Cannot find device "nvmf_tgt_br2" 00:06:57.233 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:06:57.233 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:57.233 Cannot find device "nvmf_br" 00:06:57.233 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:06:57.233 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:57.233 Cannot find device "nvmf_init_if" 00:06:57.233 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:06:57.233 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:57.492 Cannot find device "nvmf_init_if2" 00:06:57.492 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:06:57.492 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:57.492 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:57.492 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:06:57.492 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:57.492 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:57.492 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:06:57.492 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:57.492 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:57.492 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:57.492 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:57.492 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:57.493 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:57.493 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:57.493 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:57.493 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:57.493 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:57.493 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:57.493 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:57.493 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:57.493 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:57.493 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:57.493 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:57.493 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:57.493 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:57.493 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:57.493 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:57.493 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:06:57.493 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:57.493 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:57.493 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:57.493 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:57.752 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:57.752 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:06:57.752 00:06:57.752 --- 10.0.0.3 ping statistics --- 00:06:57.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.752 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:57.752 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:57.752 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:06:57.752 00:06:57.752 --- 10.0.0.4 ping statistics --- 00:06:57.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.752 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:57.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:57.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:06:57.752 00:06:57.752 --- 10.0.0.1 ping statistics --- 00:06:57.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.752 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:57.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:57.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:06:57.752 00:06:57.752 --- 10.0.0.2 ping statistics --- 00:06:57.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.752 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62323 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62323 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 62323 ']' 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:57.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:57.752 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:57.752 [2024-11-15 10:24:58.517822] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:57.752 [2024-11-15 10:24:58.517914] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.011 [2024-11-15 10:24:58.668190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:58.011 [2024-11-15 10:24:58.737162] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:58.011 [2024-11-15 10:24:58.737477] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:58.011 [2024-11-15 10:24:58.737655] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:58.011 [2024-11-15 10:24:58.737940] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:58.011 [2024-11-15 10:24:58.738157] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:58.011 [2024-11-15 10:24:58.739666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.011 [2024-11-15 10:24:58.739940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:58.012 [2024-11-15 10:24:58.739950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.012 [2024-11-15 10:24:58.739776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.012 [2024-11-15 10:24:58.800493] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.271 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:58.271 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:58.271 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:58.271 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:58.271 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.271 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:58.271 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:58.271 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.271 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.271 [2024-11-15 10:24:58.915985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.271 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.271 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:58.271 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:58.271 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.271 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
00:06:58.271 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:58.271 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:58.271 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.272 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.272 Malloc0 00:06:58.272 [2024-11-15 10:24:59.001224] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:58.272 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.272 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:58.272 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:58.272 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:58.272 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62370 00:06:58.272 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62370 /var/tmp/bdevperf.sock 00:06:58.272 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 62370 ']' 00:06:58.272 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:58.272 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:58.272 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:58.272 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:58.272 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:06:58.272 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:58.272 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:58.272 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.272 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:58.272 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:58.272 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:58.272 { 00:06:58.272 "params": { 00:06:58.272 "name": "Nvme$subsystem", 00:06:58.272 "trtype": "$TEST_TRANSPORT", 00:06:58.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:58.272 "adrfam": "ipv4", 00:06:58.272 "trsvcid": "$NVMF_PORT", 00:06:58.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:58.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:58.272 "hdgst": ${hdgst:-false}, 00:06:58.272 "ddgst": ${ddgst:-false} 00:06:58.272 }, 00:06:58.272 "method": "bdev_nvme_attach_controller" 00:06:58.272 } 00:06:58.272 EOF 00:06:58.272 )") 00:06:58.272 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:58.272 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:58.272 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:58.272 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:58.272 "params": { 00:06:58.272 "name": "Nvme0", 00:06:58.272 "trtype": "tcp", 00:06:58.272 "traddr": "10.0.0.3", 00:06:58.272 "adrfam": "ipv4", 00:06:58.272 "trsvcid": "4420", 00:06:58.272 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:58.272 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:58.272 "hdgst": false, 00:06:58.272 "ddgst": false 00:06:58.272 }, 00:06:58.272 "method": "bdev_nvme_attach_controller" 00:06:58.272 }' 00:06:58.272 [2024-11-15 10:24:59.108637] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:06:58.272 [2024-11-15 10:24:59.108921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62370 ] 00:06:58.531 [2024-11-15 10:24:59.265391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.531 [2024-11-15 10:24:59.325177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.789 [2024-11-15 10:24:59.389991] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.789 Running I/O for 10 seconds... 
00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.357 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.357 [2024-11-15 
10:25:00.130119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.357 [2024-11-15 10:25:00.130335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.357 [2024-11-15 10:25:00.130534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.357 [2024-11-15 10:25:00.130712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.357 [2024-11-15 10:25:00.130900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.357 [2024-11-15 10:25:00.130919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.357 [2024-11-15 10:25:00.130930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.357 [2024-11-15 10:25:00.130938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.357 [2024-11-15 10:25:00.130946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.357 [2024-11-15 10:25:00.130955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.357 [2024-11-15 10:25:00.130963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.357 [2024-11-15 10:25:00.130972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.357 [2024-11-15 10:25:00.130980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.357 [2024-11-15 10:25:00.130988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.357 [2024-11-15 10:25:00.130996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.357 [2024-11-15 10:25:00.131004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.357 [2024-11-15 10:25:00.131013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.357 [2024-11-15 10:25:00.131021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.357 [2024-11-15 10:25:00.131029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.357 [2024-11-15 10:25:00.131038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.357 [2024-11-15 10:25:00.131046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.357 [2024-11-15 10:25:00.131075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to 
be set 00:06:59.358 [2024-11-15 10:25:00.131085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691ef0 is same with the state(6) to be set 00:06:59.358 [2024-11-15 10:25:00.131563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.131593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.131615] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.131626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.131638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.131648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.131660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.131669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.131680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.131689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.131701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.131710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.131721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.131730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.131741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.131751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.131762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.131771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.131782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.131791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.131803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.131824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.131850] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.131872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.131884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.131905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.131918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.131927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.131938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.131948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.131959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.131968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.131979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.131989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.132000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.132010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.132021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.132030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.132041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.132063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.132087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.132097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.132108] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.132118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.132129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.132145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.132157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.132167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.132178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.132195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.132207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.132216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.132227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.132236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.132248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.132257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.132268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.132289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.132301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.132311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.132322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.132331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.132342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.132352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.132363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.132372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.132383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.132393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.132404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.132413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.132425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.132434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.132445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.132454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.132465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.132474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.358 [2024-11-15 10:25:00.132485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.358 [2024-11-15 10:25:00.132495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.132506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.359 [2024-11-15 10:25:00.132521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.132532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.359 [2024-11-15 10:25:00.132542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.132558] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.359 [2024-11-15 10:25:00.132569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.132580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.359 [2024-11-15 10:25:00.132590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.132601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.359 [2024-11-15 10:25:00.132610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.132621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.359 [2024-11-15 10:25:00.132640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.132653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.359 [2024-11-15 10:25:00.132662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.132674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.359 [2024-11-15 10:25:00.132698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.132709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.359 [2024-11-15 10:25:00.132718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.132729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.359 [2024-11-15 10:25:00.132738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.132749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.359 [2024-11-15 10:25:00.132758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.132769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.359 [2024-11-15 10:25:00.132778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.132789] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.359 [2024-11-15 10:25:00.132798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.132809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.359 [2024-11-15 10:25:00.132818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.132829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.359 [2024-11-15 10:25:00.132838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.132848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.359 [2024-11-15 10:25:00.132857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.132868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.359 [2024-11-15 10:25:00.132877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.132888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.359 [2024-11-15 10:25:00.132897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.132909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.359 [2024-11-15 10:25:00.132918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.132929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.359 [2024-11-15 10:25:00.132938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.132949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.359 [2024-11-15 10:25:00.132958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.132969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.359 [2024-11-15 10:25:00.132983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.132994] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.359 [2024-11-15 10:25:00.133004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.133015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.359 [2024-11-15 10:25:00.133024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.133034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.359 [2024-11-15 10:25:00.133044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.133054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cfc00 is same with the state(6) to be set 00:06:59.359 [2024-11-15 10:25:00.133252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.359 [2024-11-15 10:25:00.133277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.133297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.359 [2024-11-15 10:25:00.133311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.133322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.359 [2024-11-15 10:25:00.133330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.133342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.359 [2024-11-15 10:25:00.133357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.359 [2024-11-15 10:25:00.133372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d0ce0 is same with the state(6) to be set 00:06:59.359 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.359 [2024-11-15 10:25:00.134698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting contro 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:59.359 ller 00:06:59.359 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.359 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.359 task offset: 114688 on job bdev=Nvme0n1 fails 00:06:59.359 00:06:59.359 Latency(us) 00:06:59.359 
[2024-11-15T10:25:00.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:59.359 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:59.359 Job: Nvme0n1 ended in about 0.62 seconds with error 00:06:59.359 Verification LBA range: start 0x0 length 0x400 00:06:59.359 Nvme0n1 : 0.62 1453.27 90.83 103.80 0.00 39617.95 5123.72 48615.80 00:06:59.359 [2024-11-15T10:25:00.212Z] =================================================================================================================== 00:06:59.359 [2024-11-15T10:25:00.212Z] Total : 1453.27 90.83 103.80 0.00 39617.95 5123.72 48615.80 00:06:59.359 [2024-11-15 10:25:00.137031] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.359 [2024-11-15 10:25:00.137197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d0ce0 (9): Bad file descriptor 00:06:59.359 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.359 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:59.359 [2024-11-15 10:25:00.143826] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:07:00.294 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62370 00:07:00.552 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62370) - No such process 00:07:00.552 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:00.552 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:00.552 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:00.552 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:00.552 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:00.552 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:00.552 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:00.552 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:00.552 { 00:07:00.552 "params": { 00:07:00.552 "name": "Nvme$subsystem", 00:07:00.552 "trtype": "$TEST_TRANSPORT", 00:07:00.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:00.552 "adrfam": "ipv4", 00:07:00.552 "trsvcid": "$NVMF_PORT", 00:07:00.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:00.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:00.552 "hdgst": ${hdgst:-false}, 00:07:00.552 "ddgst": ${ddgst:-false} 00:07:00.552 }, 00:07:00.552 "method": "bdev_nvme_attach_controller" 00:07:00.552 } 00:07:00.552 EOF 00:07:00.552 )") 00:07:00.552 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:00.552 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
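Stripped of the xtrace noise, the host-management exercise between the two bdevperf runs is: wait until Nvme0n1 has served at least 100 reads, remove the host from the subsystem so outstanding READs complete as ABORTED - SQ DELETION and the job fails, then re-add the host so the controller reset can succeed. A condensed sketch using the same rpc.py and jq invocations seen in the trace follows; the polling interval and the unbounded loop are simplifications (the test itself gives up after 10 attempts).

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Poll bdevperf's RPC socket until the bdev has completed enough reads.
while :; do
	reads=$("$rpc_py" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
		| jq -r '.bdevs[0].num_read_ops')
	[ "$reads" -ge 100 ] && break
	sleep 0.25
done

# Revoke the host's access: in-flight I/O is aborted (SQ DELETION) and bdevperf
# reports the job as failed, as seen in the latency summary above.
"$rpc_py" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

# Restore access so the automatic controller reset can complete.
"$rpc_py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0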
00:07:00.552 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:00.552 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:00.552 "params": { 00:07:00.552 "name": "Nvme0", 00:07:00.552 "trtype": "tcp", 00:07:00.552 "traddr": "10.0.0.3", 00:07:00.552 "adrfam": "ipv4", 00:07:00.552 "trsvcid": "4420", 00:07:00.552 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:00.552 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:00.552 "hdgst": false, 00:07:00.552 "ddgst": false 00:07:00.552 }, 00:07:00.553 "method": "bdev_nvme_attach_controller" 00:07:00.553 }' 00:07:00.553 [2024-11-15 10:25:01.202824] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:07:00.553 [2024-11-15 10:25:01.202918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62408 ] 00:07:00.553 [2024-11-15 10:25:01.355823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.811 [2024-11-15 10:25:01.425492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.811 [2024-11-15 10:25:01.493671] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.811 Running I/O for 1 seconds... 00:07:02.188 1536.00 IOPS, 96.00 MiB/s 00:07:02.188 Latency(us) 00:07:02.188 [2024-11-15T10:25:03.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:02.188 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:02.188 Verification LBA range: start 0x0 length 0x400 00:07:02.188 Nvme0n1 : 1.03 1554.39 97.15 0.00 0.00 40372.49 4110.89 38368.35 00:07:02.188 [2024-11-15T10:25:03.041Z] =================================================================================================================== 00:07:02.188 [2024-11-15T10:25:03.041Z] Total : 1554.39 97.15 0.00 0.00 40372.49 4110.89 38368.35 00:07:02.188 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:02.188 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:02.188 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:02.188 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:02.188 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:02.189 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:02.189 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:02.189 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:02.189 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:02.189 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:02.189 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:02.189 rmmod nvme_tcp 00:07:02.189 rmmod nvme_fabrics 
00:07:02.189 rmmod nvme_keyring 00:07:02.189 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:02.189 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:02.189 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:02.189 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62323 ']' 00:07:02.189 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62323 00:07:02.189 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 62323 ']' 00:07:02.189 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 62323 00:07:02.189 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:07:02.189 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:02.189 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62323 00:07:02.189 killing process with pid 62323 00:07:02.189 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:02.189 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:02.189 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62323' 00:07:02.189 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 62323 00:07:02.189 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 62323 00:07:02.448 [2024-11-15 10:25:03.209681] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:02.448 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:02.448 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:02.448 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:02.448 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:02.448 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:02.448 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:02.448 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:02.448 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:02.448 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:02.448 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:02.448 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:02.448 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:02.448 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 
-- # ip link set nvmf_tgt_br2 nomaster 00:07:02.707 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:02.707 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:02.707 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:02.707 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:02.707 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:02.707 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:02.707 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:02.707 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:02.707 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:02.707 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:02.707 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.707 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:02.707 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.707 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:07:02.707 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:02.707 00:07:02.707 real 0m5.747s 00:07:02.707 user 0m20.489s 00:07:02.707 sys 0m1.638s 00:07:02.707 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:02.707 ************************************ 00:07:02.707 END TEST nvmf_host_management 00:07:02.707 ************************************ 00:07:02.707 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:02.707 10:25:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:02.707 10:25:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:02.707 10:25:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:02.707 10:25:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:02.707 ************************************ 00:07:02.707 START TEST nvmf_lvol 00:07:02.707 ************************************ 00:07:02.707 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:02.966 * Looking for test storage... 
00:07:02.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:02.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.966 --rc genhtml_branch_coverage=1 00:07:02.966 --rc genhtml_function_coverage=1 00:07:02.966 --rc genhtml_legend=1 00:07:02.966 --rc geninfo_all_blocks=1 00:07:02.966 --rc geninfo_unexecuted_blocks=1 00:07:02.966 00:07:02.966 ' 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:02.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.966 --rc genhtml_branch_coverage=1 00:07:02.966 --rc genhtml_function_coverage=1 00:07:02.966 --rc genhtml_legend=1 00:07:02.966 --rc geninfo_all_blocks=1 00:07:02.966 --rc geninfo_unexecuted_blocks=1 00:07:02.966 00:07:02.966 ' 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:02.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.966 --rc genhtml_branch_coverage=1 00:07:02.966 --rc genhtml_function_coverage=1 00:07:02.966 --rc genhtml_legend=1 00:07:02.966 --rc geninfo_all_blocks=1 00:07:02.966 --rc geninfo_unexecuted_blocks=1 00:07:02.966 00:07:02.966 ' 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:02.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.966 --rc genhtml_branch_coverage=1 00:07:02.966 --rc genhtml_function_coverage=1 00:07:02.966 --rc genhtml_legend=1 00:07:02.966 --rc geninfo_all_blocks=1 00:07:02.966 --rc geninfo_unexecuted_blocks=1 00:07:02.966 00:07:02.966 ' 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.966 10:25:03 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.966 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:02.967 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:02.967 
10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:02.967 Cannot find device "nvmf_init_br" 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:02.967 Cannot find device "nvmf_init_br2" 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:02.967 Cannot find device "nvmf_tgt_br" 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:07:02.967 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:03.228 Cannot find device "nvmf_tgt_br2" 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:03.228 Cannot find device "nvmf_init_br" 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:03.228 Cannot find device "nvmf_init_br2" 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:03.228 Cannot find device "nvmf_tgt_br" 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:03.228 Cannot find device "nvmf_tgt_br2" 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:03.228 Cannot find device "nvmf_br" 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:03.228 Cannot find device "nvmf_init_if" 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:03.228 Cannot find device "nvmf_init_if2" 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:03.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:03.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:03.228 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:03.228 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:03.228 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:03.228 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:03.228 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:03.228 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:03.228 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:03.228 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:03.228 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:03.228 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:03.228 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:03.228 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:03.495 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:03.495 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:07:03.495 00:07:03.495 --- 10.0.0.3 ping statistics --- 00:07:03.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.495 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:03.495 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:03.495 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:07:03.495 00:07:03.495 --- 10.0.0.4 ping statistics --- 00:07:03.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.495 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:03.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:03.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:07:03.495 00:07:03.495 --- 10.0.0.1 ping statistics --- 00:07:03.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.495 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:03.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:03.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:07:03.495 00:07:03.495 --- 10.0.0.2 ping statistics --- 00:07:03.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.495 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:03.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62682 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62682 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 62682 ']' 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:03.495 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:03.495 [2024-11-15 10:25:04.206155] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:07:03.495 [2024-11-15 10:25:04.206440] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.754 [2024-11-15 10:25:04.363717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.754 [2024-11-15 10:25:04.426881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:03.754 [2024-11-15 10:25:04.427216] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:03.754 [2024-11-15 10:25:04.427465] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:03.754 [2024-11-15 10:25:04.427613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:03.754 [2024-11-15 10:25:04.427736] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:03.754 [2024-11-15 10:25:04.429042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.754 [2024-11-15 10:25:04.429109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.754 [2024-11-15 10:25:04.429117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.754 [2024-11-15 10:25:04.489741] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.754 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:03.754 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:07:03.754 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:03.754 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:03.754 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:04.014 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:04.014 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:04.273 [2024-11-15 10:25:04.899932] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.273 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:04.531 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:04.531 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:04.791 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:04.791 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:05.050 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:05.309 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=839948c6-4280-4397-868c-083c88feb427 00:07:05.309 10:25:06 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 839948c6-4280-4397-868c-083c88feb427 lvol 20 00:07:05.568 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ff1006f3-9dbc-4e5e-8fe7-bb32d70f910a 00:07:05.568 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:05.828 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ff1006f3-9dbc-4e5e-8fe7-bb32d70f910a 00:07:06.087 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:06.346 [2024-11-15 10:25:07.135458] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:06.346 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:06.605 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62750 00:07:06.605 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:06.605 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:07.984 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot ff1006f3-9dbc-4e5e-8fe7-bb32d70f910a MY_SNAPSHOT 00:07:07.984 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=fcc6533a-4f7a-4b77-ac50-5c4479a117c0 00:07:07.984 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize ff1006f3-9dbc-4e5e-8fe7-bb32d70f910a 30 00:07:08.242 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone fcc6533a-4f7a-4b77-ac50-5c4479a117c0 MY_CLONE 00:07:08.810 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=601b4b12-923b-40cd-8a3f-d7e3da8b2e16 00:07:08.810 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 601b4b12-923b-40cd-8a3f-d7e3da8b2e16 00:07:09.069 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62750 00:07:17.217 Initializing NVMe Controllers 00:07:17.217 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:07:17.217 Controller IO queue size 128, less than required. 00:07:17.217 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:17.217 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:17.217 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:17.217 Initialization complete. Launching workers. 
00:07:17.217 ======================================================== 00:07:17.217 Latency(us) 00:07:17.217 Device Information : IOPS MiB/s Average min max 00:07:17.217 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10434.30 40.76 12271.48 2759.01 98472.74 00:07:17.217 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10295.80 40.22 12440.10 2778.29 68947.87 00:07:17.217 ======================================================== 00:07:17.217 Total : 20730.09 80.98 12355.23 2759.01 98472.74 00:07:17.217 00:07:17.217 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:17.217 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ff1006f3-9dbc-4e5e-8fe7-bb32d70f910a 00:07:17.476 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 839948c6-4280-4397-868c-083c88feb427 00:07:17.735 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:17.735 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:17.735 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:17.735 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:17.735 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:17.994 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:17.994 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:17.994 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:17.994 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:17.994 rmmod nvme_tcp 00:07:17.994 rmmod nvme_fabrics 00:07:17.994 rmmod nvme_keyring 00:07:17.994 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:17.994 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:17.994 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:17.994 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62682 ']' 00:07:17.994 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62682 00:07:17.994 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 62682 ']' 00:07:17.994 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 62682 00:07:17.994 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:07:17.994 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:17.994 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62682 00:07:17.994 killing process with pid 62682 00:07:17.994 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:17.994 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:17.994 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 62682' 00:07:17.994 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 62682 00:07:17.994 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 62682 00:07:18.254 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:18.254 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:18.254 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:18.254 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:18.254 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:18.254 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:18.254 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:18.254 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:18.254 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:18.254 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:18.254 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:18.254 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:18.254 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:18.254 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:18.254 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:18.254 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:18.254 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:18.254 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:18.514 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:18.514 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:18.514 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:18.514 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:18.514 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:18.514 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.514 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.514 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.514 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:07:18.514 00:07:18.514 real 0m15.673s 00:07:18.514 user 1m4.583s 00:07:18.514 sys 0m4.320s 00:07:18.514 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:07:18.514 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:18.514 ************************************ 00:07:18.514 END TEST nvmf_lvol 00:07:18.514 ************************************ 00:07:18.514 10:25:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:18.514 10:25:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:18.514 10:25:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:18.514 10:25:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:18.514 ************************************ 00:07:18.514 START TEST nvmf_lvs_grow 00:07:18.514 ************************************ 00:07:18.514 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:18.514 * Looking for test storage... 00:07:18.514 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:18.514 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:18.514 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:18.514 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.774 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:18.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.775 --rc genhtml_branch_coverage=1 00:07:18.775 --rc genhtml_function_coverage=1 00:07:18.775 --rc genhtml_legend=1 00:07:18.775 --rc geninfo_all_blocks=1 00:07:18.775 --rc geninfo_unexecuted_blocks=1 00:07:18.775 00:07:18.775 ' 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:18.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.775 --rc genhtml_branch_coverage=1 00:07:18.775 --rc genhtml_function_coverage=1 00:07:18.775 --rc genhtml_legend=1 00:07:18.775 --rc geninfo_all_blocks=1 00:07:18.775 --rc geninfo_unexecuted_blocks=1 00:07:18.775 00:07:18.775 ' 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:18.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.775 --rc genhtml_branch_coverage=1 00:07:18.775 --rc genhtml_function_coverage=1 00:07:18.775 --rc genhtml_legend=1 00:07:18.775 --rc geninfo_all_blocks=1 00:07:18.775 --rc geninfo_unexecuted_blocks=1 00:07:18.775 00:07:18.775 ' 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:18.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.775 --rc genhtml_branch_coverage=1 00:07:18.775 --rc genhtml_function_coverage=1 00:07:18.775 --rc genhtml_legend=1 00:07:18.775 --rc geninfo_all_blocks=1 00:07:18.775 --rc geninfo_unexecuted_blocks=1 00:07:18.775 00:07:18.775 ' 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:18.775 10:25:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:18.775 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:18.775 Cannot find device "nvmf_init_br" 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:18.775 Cannot find device "nvmf_init_br2" 00:07:18.775 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:18.776 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:18.776 Cannot find device "nvmf_tgt_br" 00:07:18.776 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:07:18.776 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:18.776 Cannot find device "nvmf_tgt_br2" 00:07:18.776 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:07:18.776 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:18.776 Cannot find device "nvmf_init_br" 00:07:18.776 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:07:18.776 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:18.776 Cannot find device "nvmf_init_br2" 00:07:18.776 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:07:18.776 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:19.037 Cannot find device "nvmf_tgt_br" 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:19.037 Cannot find device "nvmf_tgt_br2" 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:19.037 Cannot find device "nvmf_br" 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:19.037 Cannot find device "nvmf_init_if" 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:19.037 Cannot find device "nvmf_init_if2" 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:19.037 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:19.037 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:19.037 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:19.297 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:19.297 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:07:19.297 00:07:19.297 --- 10.0.0.3 ping statistics --- 00:07:19.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.297 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:19.297 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:19.297 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:07:19.297 00:07:19.297 --- 10.0.0.4 ping statistics --- 00:07:19.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.297 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:19.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:19.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:07:19.297 00:07:19.297 --- 10.0.0.1 ping statistics --- 00:07:19.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.297 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:19.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:19.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:07:19.297 00:07:19.297 --- 10.0.0.2 ping statistics --- 00:07:19.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.297 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:19.297 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:19.297 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:19.297 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63128 00:07:19.297 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:19.297 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63128 00:07:19.297 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 63128 ']' 00:07:19.297 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.297 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:19.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.297 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.297 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:19.297 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:19.297 [2024-11-15 10:25:20.072548] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:07:19.297 [2024-11-15 10:25:20.072680] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.557 [2024-11-15 10:25:20.228597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.557 [2024-11-15 10:25:20.296261] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:19.557 [2024-11-15 10:25:20.296333] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:19.557 [2024-11-15 10:25:20.296347] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:19.557 [2024-11-15 10:25:20.296369] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:19.557 [2024-11-15 10:25:20.296378] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:19.557 [2024-11-15 10:25:20.296916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.557 [2024-11-15 10:25:20.359489] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.493 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:20.493 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:07:20.493 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:20.493 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:20.493 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:20.493 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:20.493 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:20.493 [2024-11-15 10:25:21.337186] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.752 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:20.752 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:20.752 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:20.752 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:20.752 ************************************ 00:07:20.752 START TEST lvs_grow_clean 00:07:20.752 ************************************ 00:07:20.752 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:07:20.752 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:20.752 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:20.752 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:20.752 10:25:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:20.752 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:20.752 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:20.752 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:20.753 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:20.753 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:21.012 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:21.012 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:21.271 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e0bfe4d2-b6ed-4b61-866b-8dd76c8976ae 00:07:21.271 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bfe4d2-b6ed-4b61-866b-8dd76c8976ae 00:07:21.271 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:21.528 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:21.528 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:21.528 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e0bfe4d2-b6ed-4b61-866b-8dd76c8976ae lvol 150 00:07:21.786 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=49d879e1-d414-4869-8256-5b2e1a1c80df 00:07:21.786 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:21.786 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:22.045 [2024-11-15 10:25:22.748001] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:22.045 [2024-11-15 10:25:22.748108] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:22.045 true 00:07:22.045 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bfe4d2-b6ed-4b61-866b-8dd76c8976ae 00:07:22.045 10:25:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:22.311 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:22.311 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:22.584 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 49d879e1-d414-4869-8256-5b2e1a1c80df 00:07:22.843 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:23.102 [2024-11-15 10:25:23.808735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:23.102 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:23.361 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:23.361 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63216 00:07:23.361 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:23.361 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63216 /var/tmp/bdevperf.sock 00:07:23.361 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 63216 ']' 00:07:23.361 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:23.361 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:23.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:23.361 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:23.361 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:23.361 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:23.361 [2024-11-15 10:25:24.112912] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:07:23.361 [2024-11-15 10:25:24.113000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63216 ] 00:07:23.620 [2024-11-15 10:25:24.267299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.620 [2024-11-15 10:25:24.342431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.620 [2024-11-15 10:25:24.402410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.557 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:24.557 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:07:24.557 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:24.557 Nvme0n1 00:07:24.816 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:25.075 [ 00:07:25.075 { 00:07:25.075 "name": "Nvme0n1", 00:07:25.075 "aliases": [ 00:07:25.075 "49d879e1-d414-4869-8256-5b2e1a1c80df" 00:07:25.075 ], 00:07:25.075 "product_name": "NVMe disk", 00:07:25.075 "block_size": 4096, 00:07:25.075 "num_blocks": 38912, 00:07:25.075 "uuid": "49d879e1-d414-4869-8256-5b2e1a1c80df", 00:07:25.075 "numa_id": -1, 00:07:25.075 "assigned_rate_limits": { 00:07:25.075 "rw_ios_per_sec": 0, 00:07:25.075 "rw_mbytes_per_sec": 0, 00:07:25.075 "r_mbytes_per_sec": 0, 00:07:25.075 "w_mbytes_per_sec": 0 00:07:25.075 }, 00:07:25.075 "claimed": false, 00:07:25.075 "zoned": false, 00:07:25.075 "supported_io_types": { 00:07:25.075 "read": true, 00:07:25.075 "write": true, 00:07:25.075 "unmap": true, 00:07:25.075 "flush": true, 00:07:25.075 "reset": true, 00:07:25.075 "nvme_admin": true, 00:07:25.075 "nvme_io": true, 00:07:25.075 "nvme_io_md": false, 00:07:25.075 "write_zeroes": true, 00:07:25.075 "zcopy": false, 00:07:25.075 "get_zone_info": false, 00:07:25.075 "zone_management": false, 00:07:25.075 "zone_append": false, 00:07:25.075 "compare": true, 00:07:25.075 "compare_and_write": true, 00:07:25.075 "abort": true, 00:07:25.075 "seek_hole": false, 00:07:25.075 "seek_data": false, 00:07:25.075 "copy": true, 00:07:25.075 "nvme_iov_md": false 00:07:25.075 }, 00:07:25.075 "memory_domains": [ 00:07:25.075 { 00:07:25.075 "dma_device_id": "system", 00:07:25.075 "dma_device_type": 1 00:07:25.075 } 00:07:25.075 ], 00:07:25.075 "driver_specific": { 00:07:25.075 "nvme": [ 00:07:25.075 { 00:07:25.075 "trid": { 00:07:25.075 "trtype": "TCP", 00:07:25.075 "adrfam": "IPv4", 00:07:25.075 "traddr": "10.0.0.3", 00:07:25.075 "trsvcid": "4420", 00:07:25.075 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:25.075 }, 00:07:25.075 "ctrlr_data": { 00:07:25.075 "cntlid": 1, 00:07:25.075 "vendor_id": "0x8086", 00:07:25.075 "model_number": "SPDK bdev Controller", 00:07:25.075 "serial_number": "SPDK0", 00:07:25.075 "firmware_revision": "25.01", 00:07:25.075 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:25.075 "oacs": { 00:07:25.075 "security": 0, 00:07:25.075 "format": 0, 00:07:25.075 "firmware": 0, 
00:07:25.075 "ns_manage": 0 00:07:25.075 }, 00:07:25.075 "multi_ctrlr": true, 00:07:25.075 "ana_reporting": false 00:07:25.075 }, 00:07:25.075 "vs": { 00:07:25.075 "nvme_version": "1.3" 00:07:25.075 }, 00:07:25.075 "ns_data": { 00:07:25.075 "id": 1, 00:07:25.075 "can_share": true 00:07:25.075 } 00:07:25.075 } 00:07:25.075 ], 00:07:25.075 "mp_policy": "active_passive" 00:07:25.075 } 00:07:25.075 } 00:07:25.075 ] 00:07:25.075 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63244 00:07:25.075 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:25.075 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:25.075 Running I/O for 10 seconds... 00:07:26.014 Latency(us) 00:07:26.014 [2024-11-15T10:25:26.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.014 Nvme0n1 : 1.00 6341.00 24.77 0.00 0.00 0.00 0.00 0.00 00:07:26.014 [2024-11-15T10:25:26.867Z] =================================================================================================================== 00:07:26.014 [2024-11-15T10:25:26.867Z] Total : 6341.00 24.77 0.00 0.00 0.00 0.00 0.00 00:07:26.014 00:07:26.949 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e0bfe4d2-b6ed-4b61-866b-8dd76c8976ae 00:07:27.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.207 Nvme0n1 : 2.00 6409.00 25.04 0.00 0.00 0.00 0.00 0.00 00:07:27.207 [2024-11-15T10:25:28.060Z] =================================================================================================================== 00:07:27.207 [2024-11-15T10:25:28.060Z] Total : 6409.00 25.04 0.00 0.00 0.00 0.00 0.00 00:07:27.207 00:07:27.207 true 00:07:27.208 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bfe4d2-b6ed-4b61-866b-8dd76c8976ae 00:07:27.208 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:27.776 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:27.776 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:27.776 10:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63244 00:07:28.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.034 Nvme0n1 : 3.00 6516.33 25.45 0.00 0.00 0.00 0.00 0.00 00:07:28.034 [2024-11-15T10:25:28.887Z] =================================================================================================================== 00:07:28.034 [2024-11-15T10:25:28.887Z] Total : 6516.33 25.45 0.00 0.00 0.00 0.00 0.00 00:07:28.034 00:07:28.976 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.976 Nvme0n1 : 4.00 6440.50 25.16 0.00 0.00 0.00 0.00 0.00 00:07:28.976 [2024-11-15T10:25:29.829Z] 
=================================================================================================================== 00:07:28.976 [2024-11-15T10:25:29.829Z] Total : 6440.50 25.16 0.00 0.00 0.00 0.00 0.00 00:07:28.976 00:07:30.361 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.361 Nvme0n1 : 5.00 6450.60 25.20 0.00 0.00 0.00 0.00 0.00 00:07:30.361 [2024-11-15T10:25:31.214Z] =================================================================================================================== 00:07:30.361 [2024-11-15T10:25:31.214Z] Total : 6450.60 25.20 0.00 0.00 0.00 0.00 0.00 00:07:30.361 00:07:31.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.296 Nvme0n1 : 6.00 6497.33 25.38 0.00 0.00 0.00 0.00 0.00 00:07:31.296 [2024-11-15T10:25:32.150Z] =================================================================================================================== 00:07:31.297 [2024-11-15T10:25:32.150Z] Total : 6497.33 25.38 0.00 0.00 0.00 0.00 0.00 00:07:31.297 00:07:32.234 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.234 Nvme0n1 : 7.00 6494.43 25.37 0.00 0.00 0.00 0.00 0.00 00:07:32.234 [2024-11-15T10:25:33.087Z] =================================================================================================================== 00:07:32.234 [2024-11-15T10:25:33.087Z] Total : 6494.43 25.37 0.00 0.00 0.00 0.00 0.00 00:07:32.234 00:07:33.168 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.169 Nvme0n1 : 8.00 6476.38 25.30 0.00 0.00 0.00 0.00 0.00 00:07:33.169 [2024-11-15T10:25:34.022Z] =================================================================================================================== 00:07:33.169 [2024-11-15T10:25:34.022Z] Total : 6476.38 25.30 0.00 0.00 0.00 0.00 0.00 00:07:33.169 00:07:34.104 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.104 Nvme0n1 : 9.00 6448.22 25.19 0.00 0.00 0.00 0.00 0.00 00:07:34.104 [2024-11-15T10:25:34.957Z] =================================================================================================================== 00:07:34.104 [2024-11-15T10:25:34.957Z] Total : 6448.22 25.19 0.00 0.00 0.00 0.00 0.00 00:07:34.104 00:07:35.041 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.041 Nvme0n1 : 10.00 6425.70 25.10 0.00 0.00 0.00 0.00 0.00 00:07:35.041 [2024-11-15T10:25:35.894Z] =================================================================================================================== 00:07:35.041 [2024-11-15T10:25:35.894Z] Total : 6425.70 25.10 0.00 0.00 0.00 0.00 0.00 00:07:35.041 00:07:35.041 00:07:35.041 Latency(us) 00:07:35.041 [2024-11-15T10:25:35.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.041 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.041 Nvme0n1 : 10.02 6427.75 25.11 0.00 0.00 19908.50 5689.72 87699.08 00:07:35.041 [2024-11-15T10:25:35.894Z] =================================================================================================================== 00:07:35.041 [2024-11-15T10:25:35.894Z] Total : 6427.75 25.11 0.00 0.00 19908.50 5689.72 87699.08 00:07:35.041 { 00:07:35.041 "results": [ 00:07:35.041 { 00:07:35.041 "job": "Nvme0n1", 00:07:35.041 "core_mask": "0x2", 00:07:35.041 "workload": "randwrite", 00:07:35.041 "status": "finished", 00:07:35.041 "queue_depth": 128, 00:07:35.041 "io_size": 4096, 00:07:35.041 "runtime": 
10.016722, 00:07:35.041 "iops": 6427.751513918425, 00:07:35.041 "mibps": 25.10840435124385, 00:07:35.041 "io_failed": 0, 00:07:35.041 "io_timeout": 0, 00:07:35.041 "avg_latency_us": 19908.503335079462, 00:07:35.041 "min_latency_us": 5689.716363636364, 00:07:35.041 "max_latency_us": 87699.08363636363 00:07:35.041 } 00:07:35.041 ], 00:07:35.041 "core_count": 1 00:07:35.041 } 00:07:35.041 10:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63216 00:07:35.041 10:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 63216 ']' 00:07:35.041 10:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 63216 00:07:35.041 10:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:07:35.041 10:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:35.041 10:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63216 00:07:35.300 killing process with pid 63216 00:07:35.300 Received shutdown signal, test time was about 10.000000 seconds 00:07:35.300 00:07:35.301 Latency(us) 00:07:35.301 [2024-11-15T10:25:36.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.301 [2024-11-15T10:25:36.154Z] =================================================================================================================== 00:07:35.301 [2024-11-15T10:25:36.154Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:35.301 10:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:35.301 10:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:35.301 10:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63216' 00:07:35.301 10:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 63216 00:07:35.301 10:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 63216 00:07:35.301 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:35.560 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:36.128 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bfe4d2-b6ed-4b61-866b-8dd76c8976ae 00:07:36.128 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:36.386 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:36.386 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:36.386 10:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:36.386 [2024-11-15 10:25:37.219623] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:36.645 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bfe4d2-b6ed-4b61-866b-8dd76c8976ae 00:07:36.645 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:36.645 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bfe4d2-b6ed-4b61-866b-8dd76c8976ae 00:07:36.645 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:36.645 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.645 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:36.645 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.645 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:36.645 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.645 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:36.645 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:36.645 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bfe4d2-b6ed-4b61-866b-8dd76c8976ae 00:07:36.903 request: 00:07:36.904 { 00:07:36.904 "uuid": "e0bfe4d2-b6ed-4b61-866b-8dd76c8976ae", 00:07:36.904 "method": "bdev_lvol_get_lvstores", 00:07:36.904 "req_id": 1 00:07:36.904 } 00:07:36.904 Got JSON-RPC error response 00:07:36.904 response: 00:07:36.904 { 00:07:36.904 "code": -19, 00:07:36.904 "message": "No such device" 00:07:36.904 } 00:07:36.904 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:36.904 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:36.904 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:36.904 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:36.904 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:37.163 aio_bdev 00:07:37.163 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
49d879e1-d414-4869-8256-5b2e1a1c80df 00:07:37.163 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=49d879e1-d414-4869-8256-5b2e1a1c80df 00:07:37.163 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:37.163 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:07:37.163 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:37.163 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:37.163 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:37.421 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 49d879e1-d414-4869-8256-5b2e1a1c80df -t 2000 00:07:37.680 [ 00:07:37.680 { 00:07:37.680 "name": "49d879e1-d414-4869-8256-5b2e1a1c80df", 00:07:37.680 "aliases": [ 00:07:37.680 "lvs/lvol" 00:07:37.680 ], 00:07:37.680 "product_name": "Logical Volume", 00:07:37.680 "block_size": 4096, 00:07:37.680 "num_blocks": 38912, 00:07:37.680 "uuid": "49d879e1-d414-4869-8256-5b2e1a1c80df", 00:07:37.680 "assigned_rate_limits": { 00:07:37.680 "rw_ios_per_sec": 0, 00:07:37.680 "rw_mbytes_per_sec": 0, 00:07:37.680 "r_mbytes_per_sec": 0, 00:07:37.680 "w_mbytes_per_sec": 0 00:07:37.680 }, 00:07:37.680 "claimed": false, 00:07:37.680 "zoned": false, 00:07:37.680 "supported_io_types": { 00:07:37.680 "read": true, 00:07:37.680 "write": true, 00:07:37.680 "unmap": true, 00:07:37.680 "flush": false, 00:07:37.680 "reset": true, 00:07:37.680 "nvme_admin": false, 00:07:37.680 "nvme_io": false, 00:07:37.680 "nvme_io_md": false, 00:07:37.680 "write_zeroes": true, 00:07:37.680 "zcopy": false, 00:07:37.680 "get_zone_info": false, 00:07:37.680 "zone_management": false, 00:07:37.680 "zone_append": false, 00:07:37.680 "compare": false, 00:07:37.680 "compare_and_write": false, 00:07:37.680 "abort": false, 00:07:37.680 "seek_hole": true, 00:07:37.680 "seek_data": true, 00:07:37.680 "copy": false, 00:07:37.680 "nvme_iov_md": false 00:07:37.680 }, 00:07:37.680 "driver_specific": { 00:07:37.680 "lvol": { 00:07:37.680 "lvol_store_uuid": "e0bfe4d2-b6ed-4b61-866b-8dd76c8976ae", 00:07:37.680 "base_bdev": "aio_bdev", 00:07:37.680 "thin_provision": false, 00:07:37.680 "num_allocated_clusters": 38, 00:07:37.680 "snapshot": false, 00:07:37.680 "clone": false, 00:07:37.680 "esnap_clone": false 00:07:37.680 } 00:07:37.680 } 00:07:37.680 } 00:07:37.680 ] 00:07:37.680 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:07:37.680 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bfe4d2-b6ed-4b61-866b-8dd76c8976ae 00:07:37.680 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:38.247 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:38.247 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:07:38.247 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bfe4d2-b6ed-4b61-866b-8dd76c8976ae 00:07:38.247 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:38.247 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 49d879e1-d414-4869-8256-5b2e1a1c80df 00:07:38.814 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e0bfe4d2-b6ed-4b61-866b-8dd76c8976ae 00:07:39.073 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:39.331 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:39.590 ************************************ 00:07:39.590 END TEST lvs_grow_clean 00:07:39.590 ************************************ 00:07:39.590 00:07:39.590 real 0m19.031s 00:07:39.590 user 0m18.137s 00:07:39.590 sys 0m2.530s 00:07:39.590 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:39.590 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:39.912 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:39.912 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:39.912 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:39.912 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:39.912 ************************************ 00:07:39.912 START TEST lvs_grow_dirty 00:07:39.912 ************************************ 00:07:39.912 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:07:39.912 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:39.912 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:39.912 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:39.912 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:39.912 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:39.912 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:39.912 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:39.912 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:39.912 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:40.185 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:40.185 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:40.443 10:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=05bf16c6-0e2f-4870-8f5b-b79695218271 00:07:40.443 10:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05bf16c6-0e2f-4870-8f5b-b79695218271 00:07:40.443 10:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:40.702 10:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:40.702 10:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:40.702 10:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 05bf16c6-0e2f-4870-8f5b-b79695218271 lvol 150 00:07:40.961 10:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b84e5c19-094b-445c-a9b8-f515fa10946b 00:07:40.961 10:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:40.961 10:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:41.220 [2024-11-15 10:25:41.994133] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:41.220 [2024-11-15 10:25:41.994265] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:41.220 true 00:07:41.220 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05bf16c6-0e2f-4870-8f5b-b79695218271 00:07:41.220 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:41.788 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:41.788 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:42.047 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b84e5c19-094b-445c-a9b8-f515fa10946b 00:07:42.305 10:25:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:42.563 [2024-11-15 10:25:43.318807] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:42.563 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:42.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:42.822 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63503 00:07:42.822 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:42.822 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:42.822 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63503 /var/tmp/bdevperf.sock 00:07:42.822 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 63503 ']' 00:07:42.822 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:42.822 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:42.822 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:42.822 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:42.822 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:42.822 [2024-11-15 10:25:43.670982] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:07:42.822 [2024-11-15 10:25:43.671426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63503 ] 00:07:43.080 [2024-11-15 10:25:43.818956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.080 [2024-11-15 10:25:43.886898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.340 [2024-11-15 10:25:43.944071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.908 10:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:43.908 10:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:43.908 10:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:44.475 Nvme0n1 00:07:44.475 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:44.475 [ 00:07:44.475 { 00:07:44.475 "name": "Nvme0n1", 00:07:44.475 "aliases": [ 00:07:44.475 "b84e5c19-094b-445c-a9b8-f515fa10946b" 00:07:44.475 ], 00:07:44.475 "product_name": "NVMe disk", 00:07:44.475 "block_size": 4096, 00:07:44.475 "num_blocks": 38912, 00:07:44.475 "uuid": "b84e5c19-094b-445c-a9b8-f515fa10946b", 00:07:44.475 "numa_id": -1, 00:07:44.475 "assigned_rate_limits": { 00:07:44.475 "rw_ios_per_sec": 0, 00:07:44.475 "rw_mbytes_per_sec": 0, 00:07:44.475 "r_mbytes_per_sec": 0, 00:07:44.475 "w_mbytes_per_sec": 0 00:07:44.475 }, 00:07:44.475 "claimed": false, 00:07:44.475 "zoned": false, 00:07:44.475 "supported_io_types": { 00:07:44.475 "read": true, 00:07:44.475 "write": true, 00:07:44.475 "unmap": true, 00:07:44.475 "flush": true, 00:07:44.475 "reset": true, 00:07:44.475 "nvme_admin": true, 00:07:44.475 "nvme_io": true, 00:07:44.475 "nvme_io_md": false, 00:07:44.475 "write_zeroes": true, 00:07:44.475 "zcopy": false, 00:07:44.475 "get_zone_info": false, 00:07:44.475 "zone_management": false, 00:07:44.475 "zone_append": false, 00:07:44.475 "compare": true, 00:07:44.475 "compare_and_write": true, 00:07:44.475 "abort": true, 00:07:44.475 "seek_hole": false, 00:07:44.475 "seek_data": false, 00:07:44.475 "copy": true, 00:07:44.475 "nvme_iov_md": false 00:07:44.475 }, 00:07:44.475 "memory_domains": [ 00:07:44.475 { 00:07:44.475 "dma_device_id": "system", 00:07:44.475 "dma_device_type": 1 00:07:44.475 } 00:07:44.475 ], 00:07:44.475 "driver_specific": { 00:07:44.475 "nvme": [ 00:07:44.475 { 00:07:44.475 "trid": { 00:07:44.475 "trtype": "TCP", 00:07:44.475 "adrfam": "IPv4", 00:07:44.475 "traddr": "10.0.0.3", 00:07:44.475 "trsvcid": "4420", 00:07:44.475 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:44.475 }, 00:07:44.475 "ctrlr_data": { 00:07:44.475 "cntlid": 1, 00:07:44.475 "vendor_id": "0x8086", 00:07:44.475 "model_number": "SPDK bdev Controller", 00:07:44.475 "serial_number": "SPDK0", 00:07:44.475 "firmware_revision": "25.01", 00:07:44.476 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:44.476 "oacs": { 00:07:44.476 "security": 0, 00:07:44.476 "format": 0, 00:07:44.476 "firmware": 0, 
00:07:44.476 "ns_manage": 0 00:07:44.476 }, 00:07:44.476 "multi_ctrlr": true, 00:07:44.476 "ana_reporting": false 00:07:44.476 }, 00:07:44.476 "vs": { 00:07:44.476 "nvme_version": "1.3" 00:07:44.476 }, 00:07:44.476 "ns_data": { 00:07:44.476 "id": 1, 00:07:44.476 "can_share": true 00:07:44.476 } 00:07:44.476 } 00:07:44.476 ], 00:07:44.476 "mp_policy": "active_passive" 00:07:44.476 } 00:07:44.476 } 00:07:44.476 ] 00:07:44.735 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:44.735 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63526 00:07:44.736 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:44.736 Running I/O for 10 seconds... 00:07:45.695 Latency(us) 00:07:45.695 [2024-11-15T10:25:46.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.695 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.695 Nvme0n1 : 1.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:07:45.695 [2024-11-15T10:25:46.548Z] =================================================================================================================== 00:07:45.695 [2024-11-15T10:25:46.548Z] Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:07:45.695 00:07:46.629 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 05bf16c6-0e2f-4870-8f5b-b79695218271 00:07:46.629 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.629 Nvme0n1 : 2.00 7048.50 27.53 0.00 0.00 0.00 0.00 0.00 00:07:46.629 [2024-11-15T10:25:47.482Z] =================================================================================================================== 00:07:46.629 [2024-11-15T10:25:47.482Z] Total : 7048.50 27.53 0.00 0.00 0.00 0.00 0.00 00:07:46.629 00:07:46.887 true 00:07:46.888 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:46.888 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05bf16c6-0e2f-4870-8f5b-b79695218271 00:07:47.146 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:47.146 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:47.146 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63526 00:07:47.713 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.713 Nvme0n1 : 3.00 7069.67 27.62 0.00 0.00 0.00 0.00 0.00 00:07:47.713 [2024-11-15T10:25:48.566Z] =================================================================================================================== 00:07:47.713 [2024-11-15T10:25:48.566Z] Total : 7069.67 27.62 0.00 0.00 0.00 0.00 0.00 00:07:47.713 00:07:48.649 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.649 Nvme0n1 : 4.00 7016.75 27.41 0.00 0.00 0.00 0.00 0.00 00:07:48.649 [2024-11-15T10:25:49.502Z] 
=================================================================================================================== 00:07:48.649 [2024-11-15T10:25:49.502Z] Total : 7016.75 27.41 0.00 0.00 0.00 0.00 0.00 00:07:48.649 00:07:50.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.024 Nvme0n1 : 5.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:07:50.024 [2024-11-15T10:25:50.877Z] =================================================================================================================== 00:07:50.024 [2024-11-15T10:25:50.877Z] Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:07:50.024 00:07:50.959 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.959 Nvme0n1 : 6.00 6920.50 27.03 0.00 0.00 0.00 0.00 0.00 00:07:50.959 [2024-11-15T10:25:51.812Z] =================================================================================================================== 00:07:50.959 [2024-11-15T10:25:51.812Z] Total : 6920.50 27.03 0.00 0.00 0.00 0.00 0.00 00:07:50.959 00:07:51.988 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.988 Nvme0n1 : 7.00 6875.29 26.86 0.00 0.00 0.00 0.00 0.00 00:07:51.988 [2024-11-15T10:25:52.841Z] =================================================================================================================== 00:07:51.988 [2024-11-15T10:25:52.841Z] Total : 6875.29 26.86 0.00 0.00 0.00 0.00 0.00 00:07:51.988 00:07:52.924 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.924 Nvme0n1 : 8.00 6873.12 26.85 0.00 0.00 0.00 0.00 0.00 00:07:52.924 [2024-11-15T10:25:53.777Z] =================================================================================================================== 00:07:52.924 [2024-11-15T10:25:53.777Z] Total : 6873.12 26.85 0.00 0.00 0.00 0.00 0.00 00:07:52.924 00:07:53.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.857 Nvme0n1 : 9.00 6843.22 26.73 0.00 0.00 0.00 0.00 0.00 00:07:53.857 [2024-11-15T10:25:54.710Z] =================================================================================================================== 00:07:53.857 [2024-11-15T10:25:54.710Z] Total : 6843.22 26.73 0.00 0.00 0.00 0.00 0.00 00:07:53.857 00:07:54.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.790 Nvme0n1 : 10.00 6806.60 26.59 0.00 0.00 0.00 0.00 0.00 00:07:54.790 [2024-11-15T10:25:55.643Z] =================================================================================================================== 00:07:54.790 [2024-11-15T10:25:55.643Z] Total : 6806.60 26.59 0.00 0.00 0.00 0.00 0.00 00:07:54.790 00:07:54.790 00:07:54.790 Latency(us) 00:07:54.790 [2024-11-15T10:25:55.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.790 Nvme0n1 : 10.02 6808.28 26.59 0.00 0.00 18796.16 9234.62 50760.61 00:07:54.790 [2024-11-15T10:25:55.643Z] =================================================================================================================== 00:07:54.790 [2024-11-15T10:25:55.643Z] Total : 6808.28 26.59 0.00 0.00 18796.16 9234.62 50760.61 00:07:54.790 { 00:07:54.790 "results": [ 00:07:54.790 { 00:07:54.790 "job": "Nvme0n1", 00:07:54.790 "core_mask": "0x2", 00:07:54.790 "workload": "randwrite", 00:07:54.790 "status": "finished", 00:07:54.790 "queue_depth": 128, 00:07:54.790 "io_size": 4096, 00:07:54.790 "runtime": 
10.016336, 00:07:54.790 "iops": 6808.277997063996, 00:07:54.790 "mibps": 26.594835926031234, 00:07:54.790 "io_failed": 0, 00:07:54.790 "io_timeout": 0, 00:07:54.790 "avg_latency_us": 18796.159339637983, 00:07:54.790 "min_latency_us": 9234.618181818181, 00:07:54.790 "max_latency_us": 50760.61090909091 00:07:54.790 } 00:07:54.790 ], 00:07:54.790 "core_count": 1 00:07:54.790 } 00:07:54.790 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63503 00:07:54.790 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 63503 ']' 00:07:54.790 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 63503 00:07:54.790 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:07:54.790 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:54.790 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63503 00:07:54.790 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:54.790 killing process with pid 63503 00:07:54.790 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:54.790 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63503' 00:07:54.790 Received shutdown signal, test time was about 10.000000 seconds 00:07:54.790 00:07:54.790 Latency(us) 00:07:54.790 [2024-11-15T10:25:55.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.790 [2024-11-15T10:25:55.643Z] =================================================================================================================== 00:07:54.790 [2024-11-15T10:25:55.643Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:54.790 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 63503 00:07:54.790 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 63503 00:07:55.048 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:55.308 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:55.874 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:55.874 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05bf16c6-0e2f-4870-8f5b-b79695218271 00:07:56.132 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:56.132 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:56.132 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63128 
00:07:56.132 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63128 00:07:56.132 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63128 Killed "${NVMF_APP[@]}" "$@" 00:07:56.132 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:56.132 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:56.132 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:56.132 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:56.132 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:56.132 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:56.132 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63664 00:07:56.132 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63664 00:07:56.132 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 63664 ']' 00:07:56.132 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.132 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:56.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.132 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.132 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:56.132 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:56.133 [2024-11-15 10:25:56.858667] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:07:56.133 [2024-11-15 10:25:56.858768] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.391 [2024-11-15 10:25:57.007586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.391 [2024-11-15 10:25:57.058363] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.391 [2024-11-15 10:25:57.058417] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.391 [2024-11-15 10:25:57.058442] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.391 [2024-11-15 10:25:57.058450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.391 [2024-11-15 10:25:57.058456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:56.391 [2024-11-15 10:25:57.058784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.391 [2024-11-15 10:25:57.107927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.391 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:56.391 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:56.391 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:56.391 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:56.391 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:56.391 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:56.391 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:56.649 [2024-11-15 10:25:57.493979] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:56.649 [2024-11-15 10:25:57.494508] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:56.649 [2024-11-15 10:25:57.495204] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:56.907 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:56.907 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b84e5c19-094b-445c-a9b8-f515fa10946b 00:07:56.907 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=b84e5c19-094b-445c-a9b8-f515fa10946b 00:07:56.907 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:56.907 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:56.907 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:56.907 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:56.907 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:57.165 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b84e5c19-094b-445c-a9b8-f515fa10946b -t 2000 00:07:57.425 [ 00:07:57.425 { 00:07:57.425 "name": "b84e5c19-094b-445c-a9b8-f515fa10946b", 00:07:57.425 "aliases": [ 00:07:57.425 "lvs/lvol" 00:07:57.425 ], 00:07:57.425 "product_name": "Logical Volume", 00:07:57.425 "block_size": 4096, 00:07:57.425 "num_blocks": 38912, 00:07:57.425 "uuid": "b84e5c19-094b-445c-a9b8-f515fa10946b", 00:07:57.425 "assigned_rate_limits": { 00:07:57.425 "rw_ios_per_sec": 0, 00:07:57.425 "rw_mbytes_per_sec": 0, 00:07:57.425 "r_mbytes_per_sec": 0, 00:07:57.425 "w_mbytes_per_sec": 0 00:07:57.425 }, 00:07:57.425 
"claimed": false, 00:07:57.425 "zoned": false, 00:07:57.425 "supported_io_types": { 00:07:57.425 "read": true, 00:07:57.425 "write": true, 00:07:57.425 "unmap": true, 00:07:57.425 "flush": false, 00:07:57.425 "reset": true, 00:07:57.425 "nvme_admin": false, 00:07:57.425 "nvme_io": false, 00:07:57.425 "nvme_io_md": false, 00:07:57.425 "write_zeroes": true, 00:07:57.425 "zcopy": false, 00:07:57.425 "get_zone_info": false, 00:07:57.425 "zone_management": false, 00:07:57.425 "zone_append": false, 00:07:57.425 "compare": false, 00:07:57.425 "compare_and_write": false, 00:07:57.425 "abort": false, 00:07:57.425 "seek_hole": true, 00:07:57.425 "seek_data": true, 00:07:57.425 "copy": false, 00:07:57.425 "nvme_iov_md": false 00:07:57.425 }, 00:07:57.425 "driver_specific": { 00:07:57.425 "lvol": { 00:07:57.425 "lvol_store_uuid": "05bf16c6-0e2f-4870-8f5b-b79695218271", 00:07:57.425 "base_bdev": "aio_bdev", 00:07:57.425 "thin_provision": false, 00:07:57.425 "num_allocated_clusters": 38, 00:07:57.425 "snapshot": false, 00:07:57.425 "clone": false, 00:07:57.425 "esnap_clone": false 00:07:57.425 } 00:07:57.425 } 00:07:57.425 } 00:07:57.425 ] 00:07:57.425 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:57.425 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05bf16c6-0e2f-4870-8f5b-b79695218271 00:07:57.425 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:57.684 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:57.684 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05bf16c6-0e2f-4870-8f5b-b79695218271 00:07:57.684 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:57.942 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:57.942 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:58.200 [2024-11-15 10:25:58.843751] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:58.200 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05bf16c6-0e2f-4870-8f5b-b79695218271 00:07:58.200 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:58.200 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05bf16c6-0e2f-4870-8f5b-b79695218271 00:07:58.200 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:58.200 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.200 10:25:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:58.200 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.200 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:58.200 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.200 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:58.200 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:58.201 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05bf16c6-0e2f-4870-8f5b-b79695218271 00:07:58.459 request: 00:07:58.459 { 00:07:58.459 "uuid": "05bf16c6-0e2f-4870-8f5b-b79695218271", 00:07:58.459 "method": "bdev_lvol_get_lvstores", 00:07:58.459 "req_id": 1 00:07:58.459 } 00:07:58.459 Got JSON-RPC error response 00:07:58.459 response: 00:07:58.459 { 00:07:58.459 "code": -19, 00:07:58.459 "message": "No such device" 00:07:58.459 } 00:07:58.459 10:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:58.459 10:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:58.459 10:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:58.459 10:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:58.459 10:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:58.717 aio_bdev 00:07:58.717 10:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b84e5c19-094b-445c-a9b8-f515fa10946b 00:07:58.717 10:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=b84e5c19-094b-445c-a9b8-f515fa10946b 00:07:58.717 10:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:58.717 10:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:58.717 10:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:58.717 10:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:58.718 10:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:58.976 10:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b84e5c19-094b-445c-a9b8-f515fa10946b -t 2000 00:07:59.235 [ 00:07:59.235 { 
00:07:59.235 "name": "b84e5c19-094b-445c-a9b8-f515fa10946b", 00:07:59.235 "aliases": [ 00:07:59.235 "lvs/lvol" 00:07:59.235 ], 00:07:59.235 "product_name": "Logical Volume", 00:07:59.235 "block_size": 4096, 00:07:59.235 "num_blocks": 38912, 00:07:59.235 "uuid": "b84e5c19-094b-445c-a9b8-f515fa10946b", 00:07:59.235 "assigned_rate_limits": { 00:07:59.235 "rw_ios_per_sec": 0, 00:07:59.235 "rw_mbytes_per_sec": 0, 00:07:59.235 "r_mbytes_per_sec": 0, 00:07:59.235 "w_mbytes_per_sec": 0 00:07:59.235 }, 00:07:59.235 "claimed": false, 00:07:59.235 "zoned": false, 00:07:59.235 "supported_io_types": { 00:07:59.235 "read": true, 00:07:59.235 "write": true, 00:07:59.235 "unmap": true, 00:07:59.235 "flush": false, 00:07:59.235 "reset": true, 00:07:59.235 "nvme_admin": false, 00:07:59.235 "nvme_io": false, 00:07:59.235 "nvme_io_md": false, 00:07:59.235 "write_zeroes": true, 00:07:59.235 "zcopy": false, 00:07:59.235 "get_zone_info": false, 00:07:59.235 "zone_management": false, 00:07:59.235 "zone_append": false, 00:07:59.235 "compare": false, 00:07:59.235 "compare_and_write": false, 00:07:59.235 "abort": false, 00:07:59.235 "seek_hole": true, 00:07:59.235 "seek_data": true, 00:07:59.235 "copy": false, 00:07:59.235 "nvme_iov_md": false 00:07:59.235 }, 00:07:59.235 "driver_specific": { 00:07:59.235 "lvol": { 00:07:59.235 "lvol_store_uuid": "05bf16c6-0e2f-4870-8f5b-b79695218271", 00:07:59.235 "base_bdev": "aio_bdev", 00:07:59.235 "thin_provision": false, 00:07:59.235 "num_allocated_clusters": 38, 00:07:59.235 "snapshot": false, 00:07:59.235 "clone": false, 00:07:59.235 "esnap_clone": false 00:07:59.235 } 00:07:59.235 } 00:07:59.235 } 00:07:59.235 ] 00:07:59.235 10:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:59.235 10:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05bf16c6-0e2f-4870-8f5b-b79695218271 00:07:59.235 10:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:59.493 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:59.493 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05bf16c6-0e2f-4870-8f5b-b79695218271 00:07:59.493 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:59.751 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:59.751 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b84e5c19-094b-445c-a9b8-f515fa10946b 00:08:00.009 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 05bf16c6-0e2f-4870-8f5b-b79695218271 00:08:00.267 10:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:00.524 10:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:01.087 ************************************ 00:08:01.087 END TEST lvs_grow_dirty 00:08:01.087 ************************************ 00:08:01.087 00:08:01.087 real 0m21.283s 00:08:01.087 user 0m45.414s 00:08:01.087 sys 0m8.626s 00:08:01.087 10:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:01.087 10:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:01.088 10:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:01.088 10:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:08:01.088 10:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:08:01.088 10:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:08:01.088 10:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:01.088 10:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:08:01.088 10:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:08:01.088 10:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:08:01.088 10:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:01.088 nvmf_trace.0 00:08:01.088 10:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:08:01.088 10:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:01.088 10:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:01.088 10:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:01.654 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:01.654 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:01.654 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:01.654 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:01.654 rmmod nvme_tcp 00:08:01.654 rmmod nvme_fabrics 00:08:01.654 rmmod nvme_keyring 00:08:01.654 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:01.654 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:01.654 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:01.654 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63664 ']' 00:08:01.654 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63664 00:08:01.654 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 63664 ']' 00:08:01.654 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 63664 00:08:01.654 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:08:01.654 10:26:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:01.654 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63664 00:08:01.654 killing process with pid 63664 00:08:01.654 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:01.654 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:01.654 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63664' 00:08:01.654 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 63664 00:08:01.654 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 63664 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.913 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.173 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:08:02.173 00:08:02.173 real 0m43.514s 00:08:02.173 user 1m10.008s 00:08:02.173 sys 0m12.205s 00:08:02.173 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:02.173 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:02.173 ************************************ 00:08:02.173 END TEST nvmf_lvs_grow 00:08:02.173 ************************************ 00:08:02.173 10:26:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:02.173 10:26:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:02.173 10:26:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:02.173 10:26:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:02.173 ************************************ 00:08:02.173 START TEST nvmf_bdev_io_wait 00:08:02.173 ************************************ 00:08:02.173 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:02.173 * Looking for test storage... 
00:08:02.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:02.173 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:02.173 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:08:02.173 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:02.173 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:02.173 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.173 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.173 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.173 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.173 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.173 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.173 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.173 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.173 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.173 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.173 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.173 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:02.173 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:02.173 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.173 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:02.173 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:02.433 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:02.433 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.433 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:02.433 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.433 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:02.433 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:02.433 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.433 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:02.433 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.433 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.433 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.433 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:02.433 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.433 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:02.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.433 --rc genhtml_branch_coverage=1 00:08:02.433 --rc genhtml_function_coverage=1 00:08:02.433 --rc genhtml_legend=1 00:08:02.433 --rc geninfo_all_blocks=1 00:08:02.433 --rc geninfo_unexecuted_blocks=1 00:08:02.433 00:08:02.433 ' 00:08:02.433 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:02.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.433 --rc genhtml_branch_coverage=1 00:08:02.433 --rc genhtml_function_coverage=1 00:08:02.433 --rc genhtml_legend=1 00:08:02.433 --rc geninfo_all_blocks=1 00:08:02.433 --rc geninfo_unexecuted_blocks=1 00:08:02.433 00:08:02.433 ' 00:08:02.433 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:02.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.433 --rc genhtml_branch_coverage=1 00:08:02.433 --rc genhtml_function_coverage=1 00:08:02.433 --rc genhtml_legend=1 00:08:02.433 --rc geninfo_all_blocks=1 00:08:02.433 --rc geninfo_unexecuted_blocks=1 00:08:02.433 00:08:02.433 ' 00:08:02.433 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:02.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.433 --rc genhtml_branch_coverage=1 00:08:02.433 --rc genhtml_function_coverage=1 00:08:02.433 --rc genhtml_legend=1 00:08:02.433 --rc geninfo_all_blocks=1 00:08:02.433 --rc geninfo_unexecuted_blocks=1 00:08:02.433 00:08:02.433 ' 00:08:02.433 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:02.434 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:02.434 
10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:02.434 Cannot find device "nvmf_init_br" 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:02.434 Cannot find device "nvmf_init_br2" 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:02.434 Cannot find device "nvmf_tgt_br" 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:02.434 Cannot find device "nvmf_tgt_br2" 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:02.434 Cannot find device "nvmf_init_br" 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:02.434 Cannot find device "nvmf_init_br2" 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:02.434 Cannot find device "nvmf_tgt_br" 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:02.434 Cannot find device "nvmf_tgt_br2" 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:02.434 Cannot find device "nvmf_br" 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:08:02.434 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:02.435 Cannot find device "nvmf_init_if" 00:08:02.435 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:08:02.435 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:02.435 Cannot find device "nvmf_init_if2" 00:08:02.435 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:08:02.435 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:02.435 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:02.435 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:08:02.435 
10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:02.435 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:02.435 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:08:02.435 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:02.435 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:02.435 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:02.435 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:02.435 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:02.435 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:02.694 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:02.694 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:08:02.694 00:08:02.694 --- 10.0.0.3 ping statistics --- 00:08:02.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.694 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:02.694 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:02.694 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:08:02.694 00:08:02.694 --- 10.0.0.4 ping statistics --- 00:08:02.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.694 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:02.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:02.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:02.694 00:08:02.694 --- 10.0.0.1 ping statistics --- 00:08:02.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.694 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:02.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:02.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:08:02.694 00:08:02.694 --- 10.0.0.2 ping statistics --- 00:08:02.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.694 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:02.694 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:02.695 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:02.695 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:02.695 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.695 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64029 00:08:02.695 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:02.695 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64029 00:08:02.695 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 64029 ']' 00:08:02.695 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.695 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:02.695 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.695 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:02.695 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.695 [2024-11-15 10:26:03.541475] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:08:02.695 [2024-11-15 10:26:03.541881] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.953 [2024-11-15 10:26:03.696857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:02.953 [2024-11-15 10:26:03.775099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.953 [2024-11-15 10:26:03.775394] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.953 [2024-11-15 10:26:03.775562] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.953 [2024-11-15 10:26:03.775722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.953 [2024-11-15 10:26:03.775786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:02.953 [2024-11-15 10:26:03.777169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.953 [2024-11-15 10:26:03.777306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.953 [2024-11-15 10:26:03.777925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.953 [2024-11-15 10:26:03.777963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.213 [2024-11-15 10:26:03.915015] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.213 [2024-11-15 10:26:03.931502] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.213 Malloc0 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.213 [2024-11-15 10:26:03.991659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64055 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:03.213 { 00:08:03.213 
"params": { 00:08:03.213 "name": "Nvme$subsystem", 00:08:03.213 "trtype": "$TEST_TRANSPORT", 00:08:03.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:03.213 "adrfam": "ipv4", 00:08:03.213 "trsvcid": "$NVMF_PORT", 00:08:03.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:03.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:03.213 "hdgst": ${hdgst:-false}, 00:08:03.213 "ddgst": ${ddgst:-false} 00:08:03.213 }, 00:08:03.213 "method": "bdev_nvme_attach_controller" 00:08:03.213 } 00:08:03.213 EOF 00:08:03.213 )") 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64057 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:03.213 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:03.213 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64060 00:08:03.213 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:03.213 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:03.213 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:03.213 { 00:08:03.213 "params": { 00:08:03.213 "name": "Nvme$subsystem", 00:08:03.213 "trtype": "$TEST_TRANSPORT", 00:08:03.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:03.213 "adrfam": "ipv4", 00:08:03.213 "trsvcid": "$NVMF_PORT", 00:08:03.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:03.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:03.213 "hdgst": ${hdgst:-false}, 00:08:03.213 "ddgst": ${ddgst:-false} 00:08:03.213 }, 00:08:03.213 "method": "bdev_nvme_attach_controller" 00:08:03.213 } 00:08:03.213 EOF 00:08:03.213 )") 00:08:03.213 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:03.213 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:03.213 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64061 00:08:03.213 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:03.213 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:03.213 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:03.213 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:03.213 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:03.213 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:03.213 "params": { 00:08:03.213 "name": "Nvme1", 00:08:03.213 "trtype": "tcp", 00:08:03.213 "traddr": "10.0.0.3", 00:08:03.213 "adrfam": "ipv4", 00:08:03.213 "trsvcid": "4420", 00:08:03.213 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:03.213 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:03.214 "hdgst": false, 00:08:03.214 "ddgst": false 00:08:03.214 }, 00:08:03.214 "method": "bdev_nvme_attach_controller" 00:08:03.214 }' 00:08:03.214 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:03.214 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:03.214 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:03.214 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:03.214 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:03.214 { 00:08:03.214 "params": { 00:08:03.214 "name": "Nvme$subsystem", 00:08:03.214 "trtype": "$TEST_TRANSPORT", 00:08:03.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:03.214 "adrfam": "ipv4", 00:08:03.214 "trsvcid": "$NVMF_PORT", 00:08:03.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:03.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:03.214 "hdgst": ${hdgst:-false}, 00:08:03.214 "ddgst": ${ddgst:-false} 00:08:03.214 }, 00:08:03.214 "method": "bdev_nvme_attach_controller" 00:08:03.214 } 00:08:03.214 EOF 00:08:03.214 )") 00:08:03.214 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:03.214 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:03.214 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:03.214 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:03.214 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:03.214 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:03.214 { 00:08:03.214 "params": { 00:08:03.214 "name": "Nvme$subsystem", 00:08:03.214 "trtype": "$TEST_TRANSPORT", 00:08:03.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:03.214 "adrfam": "ipv4", 00:08:03.214 "trsvcid": "$NVMF_PORT", 00:08:03.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:03.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:03.214 "hdgst": ${hdgst:-false}, 00:08:03.214 "ddgst": ${ddgst:-false} 00:08:03.214 }, 00:08:03.214 "method": "bdev_nvme_attach_controller" 00:08:03.214 } 00:08:03.214 EOF 00:08:03.214 )") 00:08:03.214 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:03.214 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:03.214 "params": { 00:08:03.214 "name": "Nvme1", 00:08:03.214 "trtype": "tcp", 00:08:03.214 "traddr": "10.0.0.3", 00:08:03.214 "adrfam": "ipv4", 00:08:03.214 "trsvcid": "4420", 00:08:03.214 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:03.214 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:03.214 "hdgst": false, 00:08:03.214 "ddgst": false 00:08:03.214 }, 00:08:03.214 "method": "bdev_nvme_attach_controller" 00:08:03.214 }' 
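The JSON object printed above is the per-controller bdev_nvme_attach_controller configuration that gen_nvmf_target_json assembles and hands to bdevperf on /dev/fd/63. A minimal standalone equivalent is sketched below; it assumes SPDK's usual "subsystems"/"config" wrapper around the printed params and reuses the paths and flags of the write job from this run (the /tmp file name is illustrative only):

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same invocation shape as the write job above: core mask 0x10, queue depth 128,
# 4096-byte I/O, write workload, 1 second run, 256 MiB memory pool.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 \
    --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w write -t 1 -s 256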
00:08:03.214 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:03.214 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:03.214 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:03.214 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:03.214 "params": { 00:08:03.214 "name": "Nvme1", 00:08:03.214 "trtype": "tcp", 00:08:03.214 "traddr": "10.0.0.3", 00:08:03.214 "adrfam": "ipv4", 00:08:03.214 "trsvcid": "4420", 00:08:03.214 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:03.214 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:03.214 "hdgst": false, 00:08:03.214 "ddgst": false 00:08:03.214 }, 00:08:03.214 "method": "bdev_nvme_attach_controller" 00:08:03.214 }' 00:08:03.214 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:03.214 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:03.214 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:03.214 "params": { 00:08:03.214 "name": "Nvme1", 00:08:03.214 "trtype": "tcp", 00:08:03.214 "traddr": "10.0.0.3", 00:08:03.214 "adrfam": "ipv4", 00:08:03.214 "trsvcid": "4420", 00:08:03.214 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:03.214 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:03.214 "hdgst": false, 00:08:03.214 "ddgst": false 00:08:03.214 }, 00:08:03.214 "method": "bdev_nvme_attach_controller" 00:08:03.214 }' 00:08:03.214 10:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64055 00:08:03.472 [2024-11-15 10:26:04.082288] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:08:03.473 [2024-11-15 10:26:04.082640] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:03.473 [2024-11-15 10:26:04.102448] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:08:03.473 [2024-11-15 10:26:04.102533] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:03.473 [2024-11-15 10:26:04.107860] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:08:03.473 [2024-11-15 10:26:04.107971] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:03.473 [2024-11-15 10:26:04.117627] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:08:03.473 [2024-11-15 10:26:04.117853] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:03.473 [2024-11-15 10:26:04.317885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.730 [2024-11-15 10:26:04.374402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:03.730 [2024-11-15 10:26:04.385168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.730 [2024-11-15 10:26:04.388575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.730 [2024-11-15 10:26:04.441764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:03.730 [2024-11-15 10:26:04.461610] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.730 [2024-11-15 10:26:04.477421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.730 Running I/O for 1 seconds... 00:08:03.730 [2024-11-15 10:26:04.541785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.730 [2024-11-15 10:26:04.550869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:03.730 [2024-11-15 10:26:04.569166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.989 Running I/O for 1 seconds... 00:08:03.989 [2024-11-15 10:26:04.602285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:03.989 [2024-11-15 10:26:04.616709] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.989 Running I/O for 1 seconds... 00:08:03.989 Running I/O for 1 seconds... 
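At this point bdev_io_wait.sh has four bdevperf instances running in parallel against the same subsystem: write (core mask 0x10, pid 64055), read (0x20, pid 64057), flush (0x40, pid 64060) and unmap (0x80, pid 64061), each started with -q 128 -o 4096 -t 1 -s 256. The script then reaps them in order, equivalent to the sketch below (PIDs are the ones from this run):

# target/bdev_io_wait.sh@37-@40: block until each workload finishes.
wait 64055   # write job, core mask 0x10
wait 64057   # read  job, core mask 0x20
wait 64060   # flush job, core mask 0x40
wait 64061   # unmap job, core mask 0x80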
00:08:04.924 6868.00 IOPS, 26.83 MiB/s 00:08:04.924 Latency(us) 00:08:04.924 [2024-11-15T10:26:05.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.924 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:04.924 Nvme1n1 : 1.03 6806.96 26.59 0.00 0.00 18530.19 5153.51 34078.72 00:08:04.924 [2024-11-15T10:26:05.777Z] =================================================================================================================== 00:08:04.924 [2024-11-15T10:26:05.777Z] Total : 6806.96 26.59 0.00 0.00 18530.19 5153.51 34078.72 00:08:04.924 172232.00 IOPS, 672.78 MiB/s 00:08:04.924 Latency(us) 00:08:04.924 [2024-11-15T10:26:05.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.925 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:04.925 Nvme1n1 : 1.00 171860.87 671.33 0.00 0.00 740.78 377.95 2159.71 00:08:04.925 [2024-11-15T10:26:05.778Z] =================================================================================================================== 00:08:04.925 [2024-11-15T10:26:05.778Z] Total : 171860.87 671.33 0.00 0.00 740.78 377.95 2159.71 00:08:04.925 7562.00 IOPS, 29.54 MiB/s 00:08:04.925 Latency(us) 00:08:04.925 [2024-11-15T10:26:05.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.925 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:04.925 Nvme1n1 : 1.01 7604.98 29.71 0.00 0.00 16730.54 9770.82 29312.47 00:08:04.925 [2024-11-15T10:26:05.778Z] =================================================================================================================== 00:08:04.925 [2024-11-15T10:26:05.778Z] Total : 7604.98 29.71 0.00 0.00 16730.54 9770.82 29312.47 00:08:04.925 10:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64057 00:08:04.925 6715.00 IOPS, 26.23 MiB/s 00:08:04.925 Latency(us) 00:08:04.925 [2024-11-15T10:26:05.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.925 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:04.925 Nvme1n1 : 1.01 6849.64 26.76 0.00 0.00 18625.14 5242.88 46232.67 00:08:04.925 [2024-11-15T10:26:05.778Z] =================================================================================================================== 00:08:04.925 [2024-11-15T10:26:05.778Z] Total : 6849.64 26.76 0.00 0.00 18625.14 5242.88 46232.67 00:08:05.183 10:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64060 00:08:05.183 10:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64061 00:08:05.183 10:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:05.183 10:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.183 10:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:05.183 10:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.183 10:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:05.183 10:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:05.183 10:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:08:05.183 10:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:05.183 10:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:05.183 10:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:05.183 10:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:05.184 10:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:05.184 rmmod nvme_tcp 00:08:05.184 rmmod nvme_fabrics 00:08:05.184 rmmod nvme_keyring 00:08:05.184 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:05.184 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:05.184 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:05.184 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64029 ']' 00:08:05.184 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64029 00:08:05.184 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 64029 ']' 00:08:05.184 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 64029 00:08:05.184 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:08:05.184 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:05.184 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64029 00:08:05.442 killing process with pid 64029 00:08:05.442 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:05.443 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:05.443 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64029' 00:08:05.443 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 64029 00:08:05.443 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 64029 00:08:05.443 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:05.443 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:05.443 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:05.443 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:05.443 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:05.443 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:05.443 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:05.443 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:05.443 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:05.443 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:05.443 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:05.443 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:05.443 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:05.706 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:05.706 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:05.706 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:05.706 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:05.706 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:05.706 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:05.706 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:05.706 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:05.706 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:05.706 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:05.706 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.706 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:05.706 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.706 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:08:05.706 00:08:05.706 real 0m3.621s 00:08:05.706 user 0m14.551s 00:08:05.706 sys 0m2.254s 00:08:05.706 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:05.706 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:05.706 ************************************ 00:08:05.706 END TEST nvmf_bdev_io_wait 00:08:05.706 ************************************ 00:08:05.706 10:26:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:05.706 10:26:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:05.706 10:26:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:05.706 10:26:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:05.706 ************************************ 00:08:05.706 START TEST nvmf_queue_depth 00:08:05.706 ************************************ 00:08:05.706 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:05.966 * Looking for test storage... 
00:08:05.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.966 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:05.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.967 --rc genhtml_branch_coverage=1 00:08:05.967 --rc genhtml_function_coverage=1 00:08:05.967 --rc genhtml_legend=1 00:08:05.967 --rc geninfo_all_blocks=1 00:08:05.967 --rc geninfo_unexecuted_blocks=1 00:08:05.967 00:08:05.967 ' 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:05.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.967 --rc genhtml_branch_coverage=1 00:08:05.967 --rc genhtml_function_coverage=1 00:08:05.967 --rc genhtml_legend=1 00:08:05.967 --rc geninfo_all_blocks=1 00:08:05.967 --rc geninfo_unexecuted_blocks=1 00:08:05.967 00:08:05.967 ' 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:05.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.967 --rc genhtml_branch_coverage=1 00:08:05.967 --rc genhtml_function_coverage=1 00:08:05.967 --rc genhtml_legend=1 00:08:05.967 --rc geninfo_all_blocks=1 00:08:05.967 --rc geninfo_unexecuted_blocks=1 00:08:05.967 00:08:05.967 ' 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:05.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.967 --rc genhtml_branch_coverage=1 00:08:05.967 --rc genhtml_function_coverage=1 00:08:05.967 --rc genhtml_legend=1 00:08:05.967 --rc geninfo_all_blocks=1 00:08:05.967 --rc geninfo_unexecuted_blocks=1 00:08:05.967 00:08:05.967 ' 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:05.967 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:05.967 
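queue_depth.sh defines a 64 MiB malloc bdev with a 512-byte block size here; these sizes are later passed to bdev_malloc_create (see the rpc_cmd call further down). A standalone equivalent against an already running target, assuming the default /var/tmp/spdk.sock RPC socket, would be:

# Create the backing device used by the queue-depth test: 64 MiB, 512 B blocks, named Malloc0.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0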
10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:05.967 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:05.967 10:26:06 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:05.968 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:05.968 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:05.968 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:05.968 Cannot find device "nvmf_init_br" 00:08:05.968 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:05.968 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:05.968 Cannot find device "nvmf_init_br2" 00:08:05.968 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:05.968 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:05.968 Cannot find device "nvmf_tgt_br" 00:08:05.968 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:08:05.968 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:05.968 Cannot find device "nvmf_tgt_br2" 00:08:05.968 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:08:05.968 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:05.968 Cannot find device "nvmf_init_br" 00:08:05.968 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:08:05.968 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:05.968 Cannot find device "nvmf_init_br2" 00:08:05.968 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:08:05.968 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:05.968 Cannot find device "nvmf_tgt_br" 00:08:05.968 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:08:05.968 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:05.968 Cannot find device "nvmf_tgt_br2" 00:08:05.968 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:08:05.968 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:06.227 Cannot find device "nvmf_br" 00:08:06.227 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:08:06.227 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:06.227 Cannot find device "nvmf_init_if" 00:08:06.227 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:08:06.227 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:06.227 Cannot find device "nvmf_init_if2" 00:08:06.227 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:08:06.227 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:06.227 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:06.227 10:26:06 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:08:06.227 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:06.227 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:06.227 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:08:06.227 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:06.227 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:06.227 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:06.227 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:06.227 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:06.227 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:06.227 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:06.227 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:06.227 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:06.227 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:06.227 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:06.227 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:06.227 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:06.227 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:06.227 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:06.227 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:06.227 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:06.227 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:06.227 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:06.227 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:06.227 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:06.227 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:06.227 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:06.227 
10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:06.227 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:06.227 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:06.486 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:06.486 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:06.486 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:06.486 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:06.486 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:06.486 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:06.486 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:06.486 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:06.486 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:08:06.486 00:08:06.486 --- 10.0.0.3 ping statistics --- 00:08:06.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.486 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:08:06.486 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:06.486 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:06.486 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:08:06.486 00:08:06.486 --- 10.0.0.4 ping statistics --- 00:08:06.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.486 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:08:06.486 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:06.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:06.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:08:06.486 00:08:06.486 --- 10.0.0.1 ping statistics --- 00:08:06.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.486 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:08:06.486 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:06.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:06.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:08:06.486 00:08:06.486 --- 10.0.0.2 ping statistics --- 00:08:06.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.487 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:06.487 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.487 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:08:06.487 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:06.487 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.487 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:06.487 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:06.487 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.487 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:06.487 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:06.487 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:06.487 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:06.487 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:06.487 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:06.487 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64323 00:08:06.487 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:06.487 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64323 00:08:06.487 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 64323 ']' 00:08:06.487 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.487 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:06.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.487 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.487 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:06.487 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:06.487 [2024-11-15 10:26:07.200648] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:08:06.487 [2024-11-15 10:26:07.200758] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.747 [2024-11-15 10:26:07.358827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.747 [2024-11-15 10:26:07.424518] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.747 [2024-11-15 10:26:07.424892] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.747 [2024-11-15 10:26:07.424930] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.747 [2024-11-15 10:26:07.424942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.747 [2024-11-15 10:26:07.424951] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.747 [2024-11-15 10:26:07.425479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.747 [2024-11-15 10:26:07.483986] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.747 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:06.747 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:06.747 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:06.747 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:06.747 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:07.006 [2024-11-15 10:26:07.608377] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:07.006 Malloc0 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:07.006 [2024-11-15 10:26:07.665129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:07.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64347 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64347 /var/tmp/bdevperf.sock 00:08:07.006 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 64347 ']' 00:08:07.007 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:07.007 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:07.007 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:07.007 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:07.007 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:07.007 [2024-11-15 10:26:07.728763] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
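The rpc_cmd calls traced above stand up the whole queue-depth fixture. Condensed into direct scripts/rpc.py invocations, with arguments exactly as logged and the target assumed to answer on the default /var/tmp/spdk.sock, the sequence is roughly:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side: TCP transport, a 64 MiB ramdisk, and a subsystem exposing it on 10.0.0.3:4420.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Load generator: bdevperf starts idle (-z) on its own RPC socket with queue depth 1024.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

# Attach the remote namespace through bdevperf's socket, then kick off the timed run (both traced below).
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests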
00:08:07.007 [2024-11-15 10:26:07.729239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64347 ] 00:08:07.266 [2024-11-15 10:26:07.881937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.266 [2024-11-15 10:26:07.952924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.266 [2024-11-15 10:26:08.008716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.266 10:26:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:07.266 10:26:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:07.266 10:26:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:07.266 10:26:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.266 10:26:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:07.525 NVMe0n1 00:08:07.525 10:26:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.525 10:26:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:07.525 Running I/O for 10 seconds... 00:08:09.841 6210.00 IOPS, 24.26 MiB/s [2024-11-15T10:26:11.631Z] 7171.00 IOPS, 28.01 MiB/s [2024-11-15T10:26:12.595Z] 7511.33 IOPS, 29.34 MiB/s [2024-11-15T10:26:13.532Z] 7533.75 IOPS, 29.43 MiB/s [2024-11-15T10:26:14.468Z] 7583.20 IOPS, 29.62 MiB/s [2024-11-15T10:26:15.402Z] 7562.33 IOPS, 29.54 MiB/s [2024-11-15T10:26:16.337Z] 7611.00 IOPS, 29.73 MiB/s [2024-11-15T10:26:17.713Z] 7607.62 IOPS, 29.72 MiB/s [2024-11-15T10:26:18.651Z] 7605.33 IOPS, 29.71 MiB/s [2024-11-15T10:26:18.651Z] 7588.70 IOPS, 29.64 MiB/s 00:08:17.798 Latency(us) 00:08:17.798 [2024-11-15T10:26:18.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.798 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:17.798 Verification LBA range: start 0x0 length 0x4000 00:08:17.798 NVMe0n1 : 10.08 7629.17 29.80 0.00 0.00 133591.24 17754.30 99138.09 00:08:17.798 [2024-11-15T10:26:18.651Z] =================================================================================================================== 00:08:17.798 [2024-11-15T10:26:18.651Z] Total : 7629.17 29.80 0.00 0.00 133591.24 17754.30 99138.09 00:08:17.798 { 00:08:17.798 "results": [ 00:08:17.798 { 00:08:17.798 "job": "NVMe0n1", 00:08:17.798 "core_mask": "0x1", 00:08:17.798 "workload": "verify", 00:08:17.798 "status": "finished", 00:08:17.798 "verify_range": { 00:08:17.798 "start": 0, 00:08:17.798 "length": 16384 00:08:17.798 }, 00:08:17.798 "queue_depth": 1024, 00:08:17.798 "io_size": 4096, 00:08:17.798 "runtime": 10.076975, 00:08:17.798 "iops": 7629.174429826411, 00:08:17.798 "mibps": 29.80146261650942, 00:08:17.798 "io_failed": 0, 00:08:17.798 "io_timeout": 0, 00:08:17.798 "avg_latency_us": 133591.24364506683, 00:08:17.798 "min_latency_us": 17754.298181818183, 00:08:17.798 "max_latency_us": 99138.09454545454 00:08:17.798 
} 00:08:17.798 ], 00:08:17.798 "core_count": 1 00:08:17.798 } 00:08:17.798 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64347 00:08:17.798 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 64347 ']' 00:08:17.798 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 64347 00:08:17.798 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:17.798 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:17.798 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64347 00:08:17.798 killing process with pid 64347 00:08:17.798 Received shutdown signal, test time was about 10.000000 seconds 00:08:17.798 00:08:17.798 Latency(us) 00:08:17.798 [2024-11-15T10:26:18.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.798 [2024-11-15T10:26:18.651Z] =================================================================================================================== 00:08:17.798 [2024-11-15T10:26:18.651Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:17.798 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:17.798 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:17.798 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64347' 00:08:17.798 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 64347 00:08:17.798 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 64347 00:08:17.798 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:17.798 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:17.798 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:17.798 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:18.057 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:18.057 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:18.057 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:18.057 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:18.057 rmmod nvme_tcp 00:08:18.057 rmmod nvme_fabrics 00:08:18.057 rmmod nvme_keyring 00:08:18.057 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:18.057 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:18.057 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:18.057 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64323 ']' 00:08:18.057 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64323 00:08:18.057 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 64323 ']' 00:08:18.057 
10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 64323 00:08:18.057 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:18.057 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:18.057 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64323 00:08:18.057 killing process with pid 64323 00:08:18.057 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:18.057 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:18.057 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64323' 00:08:18.057 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 64323 00:08:18.057 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 64323 00:08:18.315 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:18.315 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:18.315 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:18.315 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:18.315 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:18.315 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:18.315 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:18.315 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:18.315 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:18.315 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:18.315 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:18.315 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:18.315 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:18.315 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:18.315 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:18.315 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:18.315 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:18.315 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:18.574 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:18.574 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:18.574 10:26:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:18.574 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:18.574 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:18.574 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.574 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.574 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.574 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:08:18.574 00:08:18.574 real 0m12.774s 00:08:18.574 user 0m21.596s 00:08:18.574 sys 0m2.283s 00:08:18.574 ************************************ 00:08:18.574 END TEST nvmf_queue_depth 00:08:18.574 ************************************ 00:08:18.574 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:18.574 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:18.574 10:26:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:18.574 10:26:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:18.574 10:26:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:18.574 10:26:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:18.574 ************************************ 00:08:18.574 START TEST nvmf_target_multipath 00:08:18.574 ************************************ 00:08:18.574 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:18.574 * Looking for test storage... 
00:08:18.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:18.574 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:18.574 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:18.574 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:18.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.834 --rc genhtml_branch_coverage=1 00:08:18.834 --rc genhtml_function_coverage=1 00:08:18.834 --rc genhtml_legend=1 00:08:18.834 --rc geninfo_all_blocks=1 00:08:18.834 --rc geninfo_unexecuted_blocks=1 00:08:18.834 00:08:18.834 ' 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:18.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.834 --rc genhtml_branch_coverage=1 00:08:18.834 --rc genhtml_function_coverage=1 00:08:18.834 --rc genhtml_legend=1 00:08:18.834 --rc geninfo_all_blocks=1 00:08:18.834 --rc geninfo_unexecuted_blocks=1 00:08:18.834 00:08:18.834 ' 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:18.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.834 --rc genhtml_branch_coverage=1 00:08:18.834 --rc genhtml_function_coverage=1 00:08:18.834 --rc genhtml_legend=1 00:08:18.834 --rc geninfo_all_blocks=1 00:08:18.834 --rc geninfo_unexecuted_blocks=1 00:08:18.834 00:08:18.834 ' 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:18.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.834 --rc genhtml_branch_coverage=1 00:08:18.834 --rc genhtml_function_coverage=1 00:08:18.834 --rc genhtml_legend=1 00:08:18.834 --rc geninfo_all_blocks=1 00:08:18.834 --rc geninfo_unexecuted_blocks=1 00:08:18.834 00:08:18.834 ' 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:18.834 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.835 
10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:18.835 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:18.835 10:26:19 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:18.835 Cannot find device "nvmf_init_br" 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:18.835 Cannot find device "nvmf_init_br2" 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:18.835 Cannot find device "nvmf_tgt_br" 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:18.835 Cannot find device "nvmf_tgt_br2" 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:18.835 Cannot find device "nvmf_init_br" 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:18.835 Cannot find device "nvmf_init_br2" 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:18.835 Cannot find device "nvmf_tgt_br" 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:18.835 Cannot find device "nvmf_tgt_br2" 00:08:18.835 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:08:18.836 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:18.836 Cannot find device "nvmf_br" 00:08:18.836 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:08:18.836 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:19.094 Cannot find device "nvmf_init_if" 00:08:19.094 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:08:19.094 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:19.094 Cannot find device "nvmf_init_if2" 00:08:19.094 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:08:19.094 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:19.094 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:19.094 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:08:19.094 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:19.094 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:19.094 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:08:19.094 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:19.094 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:19.094 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:19.094 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:19.094 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:19.094 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:19.094 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:19.094 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:19.094 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:19.094 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:19.094 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:19.094 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
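nvmf_veth_init, traced above, rebuilds the two host-to-namespace paths that this multipath test exercises. Stripped of the link-up and bridge-enslaving steps that follow immediately below in the log, the topology amounts to (names and addresses exactly as logged):

# Target network namespace plus two independent veth paths between host (initiator) and namespace (target).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br     # path 1, initiator side
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # path 2, initiator side
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # path 1, target side
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # path 2, target side
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: 10.0.0.1/.2 stay on the host, 10.0.0.3/.4 live inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2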
00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:19.095 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:19.095 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:08:19.095 00:08:19.095 --- 10.0.0.3 ping statistics --- 00:08:19.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.095 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:19.095 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:19.095 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:08:19.095 00:08:19.095 --- 10.0.0.4 ping statistics --- 00:08:19.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.095 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:19.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:19.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:08:19.095 00:08:19.095 --- 10.0.0.1 ping statistics --- 00:08:19.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.095 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:19.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:08:19.095 00:08:19.095 --- 10.0.0.2 ping statistics --- 00:08:19.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.095 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64721 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64721 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@833 -- # '[' -z 64721 ']' 00:08:19.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
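waitforlisten then blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock. A simplified, hypothetical stand-in for that helper (the real implementation in autotest_common.sh does more bookkeeping) could poll the RPC socket like this:

# Hypothetical simplified poller: bail out if the target dies, otherwise retry until RPC responds.
wait_for_rpc() {
    local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}

wait_for_rpc "$nvmfpid"   # nvmfpid=64721 in this run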
00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:19.095 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:19.354 [2024-11-15 10:26:19.999397] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:08:19.354 [2024-11-15 10:26:19.999516] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.354 [2024-11-15 10:26:20.156952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:19.612 [2024-11-15 10:26:20.229772] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.612 [2024-11-15 10:26:20.230331] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.612 [2024-11-15 10:26:20.230648] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.612 [2024-11-15 10:26:20.230943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.612 [2024-11-15 10:26:20.231239] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
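The app_setup_trace notices above describe how to grab a tracepoint snapshot from the running target (-e 0xFFFF keeps all groups armed). The two options the target suggests boil down to the following, assuming the standard build output path for the spdk_trace binary:

# Decode the live tracepoint ring buffer for shm instance 0 (matches '-i 0' on the nvmf_tgt command line)...
/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
# ...or stash the raw buffer for offline analysis, as the notice recommends.
cp /dev/shm/nvmf_trace.0 /tmp/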
00:08:19.612 [2024-11-15 10:26:20.232867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.612 [2024-11-15 10:26:20.233004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.612 [2024-11-15 10:26:20.233644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.612 [2024-11-15 10:26:20.233651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.612 [2024-11-15 10:26:20.293607] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.179 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:20.179 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@866 -- # return 0 00:08:20.179 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:20.179 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:20.179 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:20.436 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.436 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:20.695 [2024-11-15 10:26:21.344997] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.695 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:20.953 Malloc0 00:08:20.953 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:21.213 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:21.781 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:21.781 [2024-11-15 10:26:22.560048] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:21.781 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:08:22.352 [2024-11-15 10:26:22.912430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:08:22.352 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid=b4733420-cf17-49bc-adb6-f89fe6fa7a33 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:22.352 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid=b4733420-cf17-49bc-adb6-f89fe6fa7a33 -t tcp -n nqn.2016-06.io.spdk:cnode1 
-a 10.0.0.4 -s 4420 -g -G 00:08:22.611 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:22.611 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # local i=0 00:08:22.611 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:08:22.611 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:08:22.611 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # sleep 2 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # return 0 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 
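With ANA reporting enabled on the subsystem (the -r flag above), the initiator connects once per portal and the kernel's native NVMe multipath exposes each connection as a controller-scoped node under the shared subsystem, which is what the path glob above discovers. A sketch of the host side, using the addresses from the trace (host NQN/ID taken from the environment as in the log):

# One connect per listener; -g/-G additionally enable TCP header and data digests, as in the trace.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

# Each path shows up as a controller-specific namespace; I/O is issued against the shared head node /dev/nvme0n1.
ls /sys/class/nvme-subsystem/nvme-subsys0/nvme*/nvme*c*    # -> nvme0c0n1 nvme0c1n1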
00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64819 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:24.515 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:24.515 [global] 00:08:24.515 thread=1 00:08:24.515 invalidate=1 00:08:24.515 rw=randrw 00:08:24.515 time_based=1 00:08:24.515 runtime=6 00:08:24.515 ioengine=libaio 00:08:24.515 direct=1 00:08:24.515 bs=4096 00:08:24.515 iodepth=128 00:08:24.515 norandommap=0 00:08:24.515 numjobs=1 00:08:24.515 00:08:24.515 verify_dump=1 00:08:24.515 verify_backlog=512 00:08:24.515 verify_state_save=0 00:08:24.515 do_verify=1 00:08:24.515 verify=crc32c-intel 00:08:24.515 [job0] 00:08:24.515 filename=/dev/nvme0n1 00:08:24.515 Could not set queue depth (nvme0n1) 00:08:24.773 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:24.773 fio-3.35 00:08:24.773 Starting 1 thread 00:08:25.709 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:25.968 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:26.227 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:26.227 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:26.227 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
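With fio already running against /dev/nvme0n1, the test flips the ANA state of each listener over RPC and then polls the per-path ana_state attribute the kernel exposes under /sys/block. The check_ana_state helper traced here reduces to roughly the following; this is a simplified stand-in for the traced logic (the retry loop and error handling are abbreviated), not the literal multipath.sh code:

    check_ana_state() {   # e.g. check_ana_state nvme0c0n1 inaccessible
        local path=$1 want=$2 timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        until [[ -e "$ana_state_f" && $(<"$ana_state_f") == "$want" ]]; do
            (( timeout-- > 0 )) || { echo "$path never reached $want" >&2; return 1; }
            sleep 1
        done
    }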
00:08:26.227 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:26.227 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:26.227 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:26.227 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:26.227 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:26.227 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:26.227 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:26.227 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:26.227 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:26.227 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:26.487 10:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:26.747 10:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:26.747 10:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:26.747 10:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:26.747 10:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:26.747 10:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:26.747 10:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:26.747 10:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:26.747 10:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:26.747 10:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:26.747 10:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:26.747 10:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:26.747 10:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:26.747 10:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64819 00:08:30.993 00:08:30.993 job0: (groupid=0, jobs=1): err= 0: pid=64844: Fri Nov 15 10:26:31 2024 00:08:30.993 read: IOPS=10.6k, BW=41.3MiB/s (43.4MB/s)(248MiB/6006msec) 00:08:30.993 slat (usec): min=5, max=10341, avg=56.30, stdev=226.18 00:08:30.993 clat (usec): min=1700, max=17788, avg=8318.48, stdev=1449.82 00:08:30.993 lat (usec): min=1710, max=18258, avg=8374.78, stdev=1453.86 00:08:30.993 clat percentiles (usec): 00:08:30.993 | 1.00th=[ 4359], 5.00th=[ 6325], 10.00th=[ 7111], 20.00th=[ 7570], 00:08:30.993 | 30.00th=[ 7898], 40.00th=[ 8029], 50.00th=[ 8160], 60.00th=[ 8291], 00:08:30.993 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[11994], 00:08:30.993 | 99.00th=[13042], 99.50th=[13304], 99.90th=[13698], 99.95th=[13829], 00:08:30.993 | 99.99th=[14222] 00:08:30.993 bw ( KiB/s): min= 6096, max=28176, per=50.76%, avg=21491.64, stdev=7493.71, samples=11 00:08:30.993 iops : min= 1524, max= 7044, avg=5372.91, stdev=1873.43, samples=11 00:08:30.993 write: IOPS=6132, BW=24.0MiB/s (25.1MB/s)(126MiB/5278msec); 0 zone resets 00:08:30.993 slat (usec): min=13, max=3661, avg=64.36, stdev=156.14 00:08:30.993 clat (usec): min=2402, max=13825, avg=7204.22, stdev=1283.30 00:08:30.993 lat (usec): min=2445, max=13847, avg=7268.59, stdev=1286.85 00:08:30.993 clat percentiles (usec): 00:08:30.993 | 1.00th=[ 3359], 5.00th=[ 4228], 10.00th=[ 5604], 20.00th=[ 6718], 00:08:30.993 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7373], 60.00th=[ 7570], 00:08:30.993 | 70.00th=[ 7767], 80.00th=[ 7963], 90.00th=[ 8225], 95.00th=[ 8455], 00:08:30.993 | 99.00th=[11207], 99.50th=[11731], 99.90th=[12780], 99.95th=[13173], 00:08:30.993 | 99.99th=[13566] 00:08:30.993 bw ( KiB/s): min= 6504, max=27600, per=87.91%, avg=21565.09, stdev=7242.57, samples=11 00:08:30.993 iops : min= 1626, max= 6900, avg=5391.27, stdev=1810.64, samples=11 00:08:30.993 lat (msec) : 2=0.01%, 4=1.60%, 10=92.48%, 20=5.92% 00:08:30.993 cpu : usr=5.55%, sys=21.55%, ctx=5605, majf=0, minf=102 00:08:30.993 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:30.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:30.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:30.993 issued rwts: total=63573,32367,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:30.993 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:30.993 00:08:30.993 Run status group 0 (all jobs): 00:08:30.993 READ: bw=41.3MiB/s (43.4MB/s), 41.3MiB/s-41.3MiB/s (43.4MB/s-43.4MB/s), io=248MiB (260MB), run=6006-6006msec 00:08:30.993 WRITE: bw=24.0MiB/s (25.1MB/s), 24.0MiB/s-24.0MiB/s (25.1MB/s-25.1MB/s), io=126MiB (133MB), run=5278-5278msec 00:08:30.993 00:08:30.993 Disk stats (read/write): 00:08:30.993 nvme0n1: ios=62653/31763, merge=0/0, ticks=499632/214728, in_queue=714360, util=98.55% 00:08:30.993 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:31.252 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:08:31.511 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:31.511 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:31.511 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:31.511 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:31.511 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:31.511 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:31.511 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:31.511 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:31.511 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:31.511 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:31.511 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:31.511 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:31.511 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:31.511 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=64926 00:08:31.511 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:31.511 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:31.511 [global] 00:08:31.511 thread=1 00:08:31.511 invalidate=1 00:08:31.511 rw=randrw 00:08:31.511 time_based=1 00:08:31.511 runtime=6 00:08:31.511 ioengine=libaio 00:08:31.511 direct=1 00:08:31.511 bs=4096 00:08:31.511 iodepth=128 00:08:31.511 norandommap=0 00:08:31.511 numjobs=1 00:08:31.511 00:08:31.511 verify_dump=1 00:08:31.511 verify_backlog=512 00:08:31.511 verify_state_save=0 00:08:31.511 do_verify=1 00:08:31.511 verify=crc32c-intel 00:08:31.511 [job0] 00:08:31.511 filename=/dev/nvme0n1 00:08:31.511 Could not set queue depth (nvme0n1) 00:08:31.769 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:31.769 fio-3.35 00:08:31.769 Starting 1 thread 00:08:32.704 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:32.963 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:33.235 
10:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:33.235 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:33.235 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:33.235 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:33.235 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:33.235 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:33.235 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:33.235 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:33.235 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:33.235 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:33.235 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:33.235 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:33.235 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:33.494 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:33.753 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:33.753 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:33.753 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:33.753 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:33.753 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:33.753 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:33.753 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:33.753 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:33.753 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:33.753 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:33.753 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:33.753 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:33.753 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 64926 00:08:37.944 00:08:37.944 job0: (groupid=0, jobs=1): err= 0: pid=64947: Fri Nov 15 10:26:38 2024 00:08:37.944 read: IOPS=11.3k, BW=44.1MiB/s (46.3MB/s)(265MiB/6006msec) 00:08:37.944 slat (usec): min=5, max=8311, avg=43.89, stdev=195.78 00:08:37.944 clat (usec): min=346, max=17056, avg=7715.61, stdev=2005.57 00:08:37.944 lat (usec): min=368, max=17131, avg=7759.50, stdev=2021.89 00:08:37.944 clat percentiles (usec): 00:08:37.944 | 1.00th=[ 3097], 5.00th=[ 4293], 10.00th=[ 4948], 20.00th=[ 5932], 00:08:37.944 | 30.00th=[ 6980], 40.00th=[ 7701], 50.00th=[ 8029], 60.00th=[ 8291], 00:08:37.944 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9634], 95.00th=[11338], 00:08:37.944 | 99.00th=[13304], 99.50th=[13829], 99.90th=[14615], 99.95th=[14877], 00:08:37.944 | 99.99th=[15139] 00:08:37.944 bw ( KiB/s): min=13024, max=37672, per=54.13%, avg=24452.36, stdev=7672.70, samples=11 00:08:37.944 iops : min= 3256, max= 9418, avg=6113.09, stdev=1918.17, samples=11 00:08:37.944 write: IOPS=6617, BW=25.8MiB/s (27.1MB/s)(143MiB/5520msec); 0 zone resets 00:08:37.944 slat (usec): min=12, max=2853, avg=54.81, stdev=137.44 00:08:37.944 clat (usec): min=1276, max=14630, avg=6554.21, stdev=1879.93 00:08:37.944 lat (usec): min=1336, max=14667, avg=6609.02, stdev=1895.63 00:08:37.944 clat percentiles (usec): 00:08:37.944 | 1.00th=[ 2769], 5.00th=[ 3425], 10.00th=[ 3818], 20.00th=[ 4490], 00:08:37.944 | 30.00th=[ 5211], 40.00th=[ 6652], 50.00th=[ 7177], 60.00th=[ 7504], 00:08:37.944 | 70.00th=[ 7767], 80.00th=[ 8094], 90.00th=[ 8455], 95.00th=[ 8848], 00:08:37.944 | 99.00th=[11076], 99.50th=[11863], 99.90th=[13042], 99.95th=[13566], 00:08:37.944 | 99.99th=[14091] 00:08:37.944 bw ( KiB/s): min=13184, max=36864, per=92.22%, avg=24410.91, stdev=7574.37, samples=11 00:08:37.944 iops : min= 3296, max= 9216, avg=6102.73, stdev=1893.59, samples=11 00:08:37.944 lat (usec) : 500=0.01% 00:08:37.944 lat (msec) : 2=0.14%, 4=6.55%, 10=88.11%, 20=5.20% 00:08:37.944 cpu : usr=5.65%, sys=23.83%, ctx=6040, majf=0, minf=90 00:08:37.944 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:37.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:37.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:37.944 issued rwts: total=67821,36526,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:37.944 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:08:37.944 00:08:37.944 Run status group 0 (all jobs): 00:08:37.944 READ: bw=44.1MiB/s (46.3MB/s), 44.1MiB/s-44.1MiB/s (46.3MB/s-46.3MB/s), io=265MiB (278MB), run=6006-6006msec 00:08:37.944 WRITE: bw=25.8MiB/s (27.1MB/s), 25.8MiB/s-25.8MiB/s (27.1MB/s-27.1MB/s), io=143MiB (150MB), run=5520-5520msec 00:08:37.944 00:08:37.944 Disk stats (read/write): 00:08:37.944 nvme0n1: ios=67040/36038, merge=0/0, ticks=490256/217699, in_queue=707955, util=98.58% 00:08:37.944 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:37.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:37.944 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:37.944 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1221 -- # local i=0 00:08:37.944 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:08:37.944 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.944 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:08:37.944 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.944 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1233 -- # return 0 00:08:37.944 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:38.203 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:38.203 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:38.203 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:38.203 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:08:38.204 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:38.204 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:38.204 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:38.204 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:38.204 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:38.204 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:38.204 rmmod nvme_tcp 00:08:38.204 rmmod nvme_fabrics 00:08:38.204 rmmod nvme_keyring 00:08:38.204 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:38.204 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:38.204 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:38.204 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 64721 ']' 
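Both fio passes finish with the paths switching underneath them (nvme0n1 util around 98% in the disk stats above), after which the trace tears everything down. In condensed form, using the same commands as traced, with the cleanup helpers summarized in comments:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1        # drops both controller paths at once
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state
    # nvmftestfini then unloads nvme-tcp/nvme-fabrics/nvme-keyring, kills the target app (pid 64721),
    # and removes the veth/bridge test topology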
00:08:38.204 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64721 00:08:38.204 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@952 -- # '[' -z 64721 ']' 00:08:38.204 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # kill -0 64721 00:08:38.204 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # uname 00:08:38.204 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:38.204 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64721 00:08:38.204 killing process with pid 64721 00:08:38.204 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:38.204 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:38.204 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64721' 00:08:38.204 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@971 -- # kill 64721 00:08:38.204 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@976 -- # wait 64721 00:08:38.463 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:38.463 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:38.463 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:38.463 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:38.463 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:38.463 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:38.463 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:38.463 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:38.463 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:38.463 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:38.463 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:38.739 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:38.739 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:38.739 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:38.739 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:38.739 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:38.739 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:38.739 10:26:39 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:38.739 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:38.739 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:38.739 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:38.739 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:38.739 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:38.739 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.739 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.739 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.739 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:08:38.739 ************************************ 00:08:38.739 END TEST nvmf_target_multipath 00:08:38.739 ************************************ 00:08:38.739 00:08:38.739 real 0m20.183s 00:08:38.739 user 1m15.745s 00:08:38.739 sys 0m9.678s 00:08:38.739 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:38.739 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:38.739 10:26:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:38.739 10:26:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:38.739 10:26:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:38.739 10:26:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:38.739 ************************************ 00:08:38.739 START TEST nvmf_zcopy 00:08:38.739 ************************************ 00:08:38.739 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:39.006 * Looking for test storage... 
00:08:39.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:39.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.006 --rc genhtml_branch_coverage=1 00:08:39.006 --rc genhtml_function_coverage=1 00:08:39.006 --rc genhtml_legend=1 00:08:39.006 --rc geninfo_all_blocks=1 00:08:39.006 --rc geninfo_unexecuted_blocks=1 00:08:39.006 00:08:39.006 ' 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:39.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.006 --rc genhtml_branch_coverage=1 00:08:39.006 --rc genhtml_function_coverage=1 00:08:39.006 --rc genhtml_legend=1 00:08:39.006 --rc geninfo_all_blocks=1 00:08:39.006 --rc geninfo_unexecuted_blocks=1 00:08:39.006 00:08:39.006 ' 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:39.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.006 --rc genhtml_branch_coverage=1 00:08:39.006 --rc genhtml_function_coverage=1 00:08:39.006 --rc genhtml_legend=1 00:08:39.006 --rc geninfo_all_blocks=1 00:08:39.006 --rc geninfo_unexecuted_blocks=1 00:08:39.006 00:08:39.006 ' 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:39.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.006 --rc genhtml_branch_coverage=1 00:08:39.006 --rc genhtml_function_coverage=1 00:08:39.006 --rc genhtml_legend=1 00:08:39.006 --rc geninfo_all_blocks=1 00:08:39.006 --rc geninfo_unexecuted_blocks=1 00:08:39.006 00:08:39.006 ' 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.006 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:39.007 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:39.007 Cannot find device "nvmf_init_br" 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:08:39.007 10:26:39 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:39.007 Cannot find device "nvmf_init_br2" 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:39.007 Cannot find device "nvmf_tgt_br" 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:39.007 Cannot find device "nvmf_tgt_br2" 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:39.007 Cannot find device "nvmf_init_br" 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:39.007 Cannot find device "nvmf_init_br2" 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:39.007 Cannot find device "nvmf_tgt_br" 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:39.007 Cannot find device "nvmf_tgt_br2" 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:08:39.007 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:39.266 Cannot find device "nvmf_br" 00:08:39.266 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:08:39.266 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:39.266 Cannot find device "nvmf_init_if" 00:08:39.266 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:08:39.266 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:39.266 Cannot find device "nvmf_init_if2" 00:08:39.266 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:08:39.266 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:39.266 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:39.266 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:08:39.266 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:39.266 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:39.266 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:08:39.266 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:39.266 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:39.266 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:08:39.266 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:39.266 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:39.266 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:39.266 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:39.266 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:39.266 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:39.266 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:39.266 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:39.266 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:39.266 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:39.266 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:39.266 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:39.266 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:39.266 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:39.266 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:39.266 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:39.266 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:39.266 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:39.266 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:39.266 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:39.266 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:39.266 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:39.266 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:39.524 10:26:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:39.524 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:39.524 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:08:39.524 00:08:39.524 --- 10.0.0.3 ping statistics --- 00:08:39.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.524 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:39.524 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:39.524 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:08:39.524 00:08:39.524 --- 10.0.0.4 ping statistics --- 00:08:39.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.524 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:39.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:39.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:08:39.524 00:08:39.524 --- 10.0.0.1 ping statistics --- 00:08:39.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.524 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:39.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:39.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:08:39.524 00:08:39.524 --- 10.0.0.2 ping statistics --- 00:08:39.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.524 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65270 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65270 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 65270 ']' 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:39.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:39.524 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:39.524 [2024-11-15 10:26:40.233502] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:08:39.524 [2024-11-15 10:26:40.233594] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.783 [2024-11-15 10:26:40.379791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.783 [2024-11-15 10:26:40.440067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.783 [2024-11-15 10:26:40.440122] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.783 [2024-11-15 10:26:40.440133] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.783 [2024-11-15 10:26:40.440141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.783 [2024-11-15 10:26:40.440148] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.783 [2024-11-15 10:26:40.440545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.784 [2024-11-15 10:26:40.492872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:39.784 [2024-11-15 10:26:40.599485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:08:39.784 [2024-11-15 10:26:40.615619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.784 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:40.042 malloc0 00:08:40.043 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.043 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:40.043 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.043 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:40.043 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.043 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:40.043 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:40.043 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:40.043 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:40.043 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:40.043 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:40.043 { 00:08:40.043 "params": { 00:08:40.043 "name": "Nvme$subsystem", 00:08:40.043 "trtype": "$TEST_TRANSPORT", 00:08:40.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:40.043 "adrfam": "ipv4", 00:08:40.043 "trsvcid": "$NVMF_PORT", 00:08:40.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:40.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:40.043 "hdgst": ${hdgst:-false}, 00:08:40.043 "ddgst": ${ddgst:-false} 00:08:40.043 }, 00:08:40.043 "method": "bdev_nvme_attach_controller" 00:08:40.043 } 00:08:40.043 EOF 00:08:40.043 )") 00:08:40.043 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:40.043 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:40.043 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:40.043 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:40.043 "params": { 00:08:40.043 "name": "Nvme1", 00:08:40.043 "trtype": "tcp", 00:08:40.043 "traddr": "10.0.0.3", 00:08:40.043 "adrfam": "ipv4", 00:08:40.043 "trsvcid": "4420", 00:08:40.043 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:40.043 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:40.043 "hdgst": false, 00:08:40.043 "ddgst": false 00:08:40.043 }, 00:08:40.043 "method": "bdev_nvme_attach_controller" 00:08:40.043 }' 00:08:40.043 [2024-11-15 10:26:40.717145] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:08:40.043 [2024-11-15 10:26:40.717269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65295 ] 00:08:40.043 [2024-11-15 10:26:40.867825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.302 [2024-11-15 10:26:40.929610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.302 [2024-11-15 10:26:40.990324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:40.302 Running I/O for 10 seconds... 00:08:42.616 5889.00 IOPS, 46.01 MiB/s [2024-11-15T10:26:44.404Z] 5921.50 IOPS, 46.26 MiB/s [2024-11-15T10:26:45.380Z] 5933.00 IOPS, 46.35 MiB/s [2024-11-15T10:26:46.323Z] 5928.75 IOPS, 46.32 MiB/s [2024-11-15T10:26:47.259Z] 5853.20 IOPS, 45.73 MiB/s [2024-11-15T10:26:48.195Z] 5845.17 IOPS, 45.67 MiB/s [2024-11-15T10:26:49.131Z] 5837.86 IOPS, 45.61 MiB/s [2024-11-15T10:26:50.512Z] 5829.75 IOPS, 45.54 MiB/s [2024-11-15T10:26:51.447Z] 5819.78 IOPS, 45.47 MiB/s [2024-11-15T10:26:51.447Z] 5798.60 IOPS, 45.30 MiB/s 00:08:50.594 Latency(us) 00:08:50.594 [2024-11-15T10:26:51.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:50.594 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:50.594 Verification LBA range: start 0x0 length 0x1000 00:08:50.594 Nvme1n1 : 10.02 5801.43 45.32 0.00 0.00 21994.10 3187.43 30980.65 00:08:50.594 [2024-11-15T10:26:51.447Z] =================================================================================================================== 00:08:50.594 [2024-11-15T10:26:51.447Z] Total : 5801.43 45.32 0.00 0.00 21994.10 3187.43 30980.65 00:08:50.594 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65417 00:08:50.594 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:50.594 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:50.594 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:50.594 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:50.594 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:50.594 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:50.594 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:50.594 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:50.594 { 00:08:50.594 "params": { 00:08:50.594 "name": "Nvme$subsystem", 00:08:50.594 "trtype": "$TEST_TRANSPORT", 00:08:50.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:50.594 "adrfam": "ipv4", 00:08:50.594 "trsvcid": "$NVMF_PORT", 00:08:50.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:50.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:50.594 "hdgst": ${hdgst:-false}, 00:08:50.594 "ddgst": ${ddgst:-false} 00:08:50.594 }, 00:08:50.594 "method": "bdev_nvme_attach_controller" 00:08:50.594 } 00:08:50.594 EOF 00:08:50.594 )") 00:08:50.594 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:50.594 [2024-11-15 10:26:51.337466] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.594 [2024-11-15 10:26:51.337581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.594 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:50.594 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:50.595 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:50.595 "params": { 00:08:50.595 "name": "Nvme1", 00:08:50.595 "trtype": "tcp", 00:08:50.595 "traddr": "10.0.0.3", 00:08:50.595 "adrfam": "ipv4", 00:08:50.595 "trsvcid": "4420", 00:08:50.595 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:50.595 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:50.595 "hdgst": false, 00:08:50.595 "ddgst": false 00:08:50.595 }, 00:08:50.595 "method": "bdev_nvme_attach_controller" 00:08:50.595 }' 00:08:50.595 [2024-11-15 10:26:51.345368] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.595 [2024-11-15 10:26:51.345396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.595 [2024-11-15 10:26:51.353371] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.595 [2024-11-15 10:26:51.353396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.595 [2024-11-15 10:26:51.361375] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.595 [2024-11-15 10:26:51.361400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.595 [2024-11-15 10:26:51.369375] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.595 [2024-11-15 10:26:51.369399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.595 [2024-11-15 10:26:51.377753] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:08:50.595 [2024-11-15 10:26:51.377833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65417 ] 00:08:50.595 [2024-11-15 10:26:51.381375] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.595 [2024-11-15 10:26:51.381401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.595 [2024-11-15 10:26:51.389374] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.595 [2024-11-15 10:26:51.389397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.595 [2024-11-15 10:26:51.397374] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.595 [2024-11-15 10:26:51.397397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.595 [2024-11-15 10:26:51.405378] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.595 [2024-11-15 10:26:51.405401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.595 [2024-11-15 10:26:51.413415] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.595 [2024-11-15 10:26:51.413447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.595 [2024-11-15 10:26:51.421380] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.595 [2024-11-15 10:26:51.421403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.595 [2024-11-15 10:26:51.429412] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.595 [2024-11-15 10:26:51.429435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.595 [2024-11-15 10:26:51.441394] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.595 [2024-11-15 10:26:51.441421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.453413] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.453437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.461403] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.461432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.473402] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.473426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.481396] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.481418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.489399] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.489421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.497406] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.497429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.505410] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.505433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.513407] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.513431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.520129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.855 [2024-11-15 10:26:51.521414] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.521446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.529420] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.529450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.537466] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.537517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.545443] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.545483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.557430] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.557458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.565423] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.565445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.573424] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.573447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.581427] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.581450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.583046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.855 [2024-11-15 10:26:51.589432] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.589456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.601434] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.601460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.609432] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.609475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:08:50.855 [2024-11-15 10:26:51.617436] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.617463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.625440] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.625466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.633442] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.633469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.645359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:50.855 [2024-11-15 10:26:51.645498] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.645541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.653482] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.653534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.665481] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.665525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.677477] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.677514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.685462] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.685489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.697496] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.697540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.855 [2024-11-15 10:26:51.705490] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.855 [2024-11-15 10:26:51.705538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.117 [2024-11-15 10:26:51.713495] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.117 [2024-11-15 10:26:51.713529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.117 [2024-11-15 10:26:51.721494] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.117 [2024-11-15 10:26:51.721526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.117 [2024-11-15 10:26:51.729534] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.117 [2024-11-15 10:26:51.729583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.117 [2024-11-15 10:26:51.741564] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.117 [2024-11-15 10:26:51.741612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:08:51.117 [2024-11-15 10:26:51.749514] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.117 [2024-11-15 10:26:51.749539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.117 [2024-11-15 10:26:51.757527] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.117 [2024-11-15 10:26:51.757575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.117 [2024-11-15 10:26:51.765526] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.117 [2024-11-15 10:26:51.765569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.117 Running I/O for 5 seconds... 00:08:51.117 [2024-11-15 10:26:51.778246] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.117 [2024-11-15 10:26:51.778280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.117 [2024-11-15 10:26:51.795374] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.117 [2024-11-15 10:26:51.795419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.117 [2024-11-15 10:26:51.812149] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.117 [2024-11-15 10:26:51.812194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.117 [2024-11-15 10:26:51.822223] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.117 [2024-11-15 10:26:51.822255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.117 [2024-11-15 10:26:51.835545] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.117 [2024-11-15 10:26:51.835627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.117 [2024-11-15 10:26:51.847589] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.117 [2024-11-15 10:26:51.847621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.117 [2024-11-15 10:26:51.863279] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.117 [2024-11-15 10:26:51.863312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.117 [2024-11-15 10:26:51.873252] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.117 [2024-11-15 10:26:51.873297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.117 [2024-11-15 10:26:51.889025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.117 [2024-11-15 10:26:51.889072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.117 [2024-11-15 10:26:51.904746] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.117 [2024-11-15 10:26:51.904777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.117 [2024-11-15 10:26:51.914898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.117 [2024-11-15 10:26:51.914934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.117 [2024-11-15 10:26:51.927812] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:08:51.117 [2024-11-15 10:26:51.927845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.117 [2024-11-15 10:26:51.939491] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.117 [2024-11-15 10:26:51.939519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.117 [2024-11-15 10:26:51.955148] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.117 [2024-11-15 10:26:51.955179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.376 [2024-11-15 10:26:51.971430] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.376 [2024-11-15 10:26:51.971462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.376 [2024-11-15 10:26:51.981266] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.376 [2024-11-15 10:26:51.981311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.376 [2024-11-15 10:26:51.997142] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.376 [2024-11-15 10:26:51.997185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.376 [2024-11-15 10:26:52.014793] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.376 [2024-11-15 10:26:52.014828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.376 [2024-11-15 10:26:52.030820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.376 [2024-11-15 10:26:52.030855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.376 [2024-11-15 10:26:52.040758] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.376 [2024-11-15 10:26:52.040794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.376 [2024-11-15 10:26:52.052730] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.376 [2024-11-15 10:26:52.052761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.376 [2024-11-15 10:26:52.068402] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.376 [2024-11-15 10:26:52.068433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.376 [2024-11-15 10:26:52.078447] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.377 [2024-11-15 10:26:52.078478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.377 [2024-11-15 10:26:52.094736] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.377 [2024-11-15 10:26:52.094773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.377 [2024-11-15 10:26:52.109505] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.377 [2024-11-15 10:26:52.109536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.377 [2024-11-15 10:26:52.119487] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.377 [2024-11-15 10:26:52.119517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.377 
[2024-11-15 10:26:52.133825] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.377 [2024-11-15 10:26:52.133858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.377 [2024-11-15 10:26:52.150630] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.377 [2024-11-15 10:26:52.150665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.377 [2024-11-15 10:26:52.167299] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.377 [2024-11-15 10:26:52.167328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.377 [2024-11-15 10:26:52.184133] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.377 [2024-11-15 10:26:52.184163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.377 [2024-11-15 10:26:52.194030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.377 [2024-11-15 10:26:52.194072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.377 [2024-11-15 10:26:52.208270] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.377 [2024-11-15 10:26:52.208301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.377 [2024-11-15 10:26:52.217686] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.377 [2024-11-15 10:26:52.217717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.636 [2024-11-15 10:26:52.233569] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.636 [2024-11-15 10:26:52.233601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.636 [2024-11-15 10:26:52.249195] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.636 [2024-11-15 10:26:52.249257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.636 [2024-11-15 10:26:52.258823] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.636 [2024-11-15 10:26:52.258859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.636 [2024-11-15 10:26:52.273637] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.636 [2024-11-15 10:26:52.273669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.636 [2024-11-15 10:26:52.290796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.636 [2024-11-15 10:26:52.290828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.636 [2024-11-15 10:26:52.306623] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.636 [2024-11-15 10:26:52.306655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.636 [2024-11-15 10:26:52.315955] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.636 [2024-11-15 10:26:52.315986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.636 [2024-11-15 10:26:52.329813] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.636 [2024-11-15 
10:26:52.329849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.636 [2024-11-15 10:26:52.344253] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.636 [2024-11-15 10:26:52.344285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.636 [2024-11-15 10:26:52.353876] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.636 [2024-11-15 10:26:52.353926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.636 [2024-11-15 10:26:52.368626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.636 [2024-11-15 10:26:52.368688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.636 [2024-11-15 10:26:52.384321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.636 [2024-11-15 10:26:52.384367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.636 [2024-11-15 10:26:52.394043] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.637 [2024-11-15 10:26:52.394092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.637 [2024-11-15 10:26:52.409180] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.637 [2024-11-15 10:26:52.409230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.637 [2024-11-15 10:26:52.428015] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.637 [2024-11-15 10:26:52.428086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.637 [2024-11-15 10:26:52.443304] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.637 [2024-11-15 10:26:52.443376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.637 [2024-11-15 10:26:52.453363] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.637 [2024-11-15 10:26:52.453429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.637 [2024-11-15 10:26:52.469003] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.637 [2024-11-15 10:26:52.469067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.637 [2024-11-15 10:26:52.486181] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.637 [2024-11-15 10:26:52.486218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.896 [2024-11-15 10:26:52.503117] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.896 [2024-11-15 10:26:52.503153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.896 [2024-11-15 10:26:52.519996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.896 [2024-11-15 10:26:52.520035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.896 [2024-11-15 10:26:52.537768] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.896 [2024-11-15 10:26:52.537808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.896 [2024-11-15 10:26:52.547927] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.896 [2024-11-15 10:26:52.547962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.896 [2024-11-15 10:26:52.563024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.896 [2024-11-15 10:26:52.563090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.896 [2024-11-15 10:26:52.578278] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.896 [2024-11-15 10:26:52.578343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.896 [2024-11-15 10:26:52.588176] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.896 [2024-11-15 10:26:52.588260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.896 [2024-11-15 10:26:52.603795] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.896 [2024-11-15 10:26:52.603835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.896 [2024-11-15 10:26:52.620959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.896 [2024-11-15 10:26:52.620992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.896 [2024-11-15 10:26:52.636996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.896 [2024-11-15 10:26:52.637026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.896 [2024-11-15 10:26:52.646510] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.896 [2024-11-15 10:26:52.646540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.896 [2024-11-15 10:26:52.659097] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.896 [2024-11-15 10:26:52.659125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.896 [2024-11-15 10:26:52.669345] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.896 [2024-11-15 10:26:52.669394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.896 [2024-11-15 10:26:52.680468] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.896 [2024-11-15 10:26:52.680501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.896 [2024-11-15 10:26:52.693655] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.896 [2024-11-15 10:26:52.693685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.896 [2024-11-15 10:26:52.703422] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.896 [2024-11-15 10:26:52.703451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.896 [2024-11-15 10:26:52.717952] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.896 [2024-11-15 10:26:52.717985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.896 [2024-11-15 10:26:52.733600] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.896 [2024-11-15 10:26:52.733630] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.896 [2024-11-15 10:26:52.744004] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.896 [2024-11-15 10:26:52.744037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.155 [2024-11-15 10:26:52.756394] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.155 [2024-11-15 10:26:52.756424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.155 [2024-11-15 10:26:52.767048] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.155 [2024-11-15 10:26:52.767091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.155 11300.00 IOPS, 88.28 MiB/s [2024-11-15T10:26:53.008Z] [2024-11-15 10:26:52.778661] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.155 [2024-11-15 10:26:52.778694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.155 [2024-11-15 10:26:52.790376] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.155 [2024-11-15 10:26:52.790422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.156 [2024-11-15 10:26:52.805510] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.156 [2024-11-15 10:26:52.805564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.156 [2024-11-15 10:26:52.820611] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.156 [2024-11-15 10:26:52.820646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.156 [2024-11-15 10:26:52.830702] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.156 [2024-11-15 10:26:52.830736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.156 [2024-11-15 10:26:52.843527] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.156 [2024-11-15 10:26:52.843593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.156 [2024-11-15 10:26:52.854864] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.156 [2024-11-15 10:26:52.854896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.156 [2024-11-15 10:26:52.871898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.156 [2024-11-15 10:26:52.871935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.156 [2024-11-15 10:26:52.888182] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.156 [2024-11-15 10:26:52.888228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.156 [2024-11-15 10:26:52.897308] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.156 [2024-11-15 10:26:52.897338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.156 [2024-11-15 10:26:52.909774] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.156 [2024-11-15 10:26:52.909807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.156 [2024-11-15 
10:26:52.921252] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.156 [2024-11-15 10:26:52.921289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.156 [2024-11-15 10:26:52.932367] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.156 [2024-11-15 10:26:52.932398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.156 [2024-11-15 10:26:52.947659] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.156 [2024-11-15 10:26:52.947693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.156 [2024-11-15 10:26:52.963941] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.156 [2024-11-15 10:26:52.963975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.156 [2024-11-15 10:26:52.974488] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.156 [2024-11-15 10:26:52.974522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.156 [2024-11-15 10:26:52.986275] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.156 [2024-11-15 10:26:52.986304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.156 [2024-11-15 10:26:52.997611] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.156 [2024-11-15 10:26:52.997642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.415 [2024-11-15 10:26:53.013314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.415 [2024-11-15 10:26:53.013349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.415 [2024-11-15 10:26:53.023530] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.415 [2024-11-15 10:26:53.023595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.415 [2024-11-15 10:26:53.039146] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.415 [2024-11-15 10:26:53.039200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.415 [2024-11-15 10:26:53.055092] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.415 [2024-11-15 10:26:53.055406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.415 [2024-11-15 10:26:53.065619] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.415 [2024-11-15 10:26:53.065762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.415 [2024-11-15 10:26:53.081743] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.415 [2024-11-15 10:26:53.081875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.415 [2024-11-15 10:26:53.097631] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.415 [2024-11-15 10:26:53.097881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.415 [2024-11-15 10:26:53.108521] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.415 [2024-11-15 10:26:53.108679] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.415 [2024-11-15 10:26:53.121305] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.415 [2024-11-15 10:26:53.121424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.415 [2024-11-15 10:26:53.136948] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.415 [2024-11-15 10:26:53.137182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.415 [2024-11-15 10:26:53.153164] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.415 [2024-11-15 10:26:53.153353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.415 [2024-11-15 10:26:53.163763] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.415 [2024-11-15 10:26:53.163911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.415 [2024-11-15 10:26:53.178827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.415 [2024-11-15 10:26:53.178974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.415 [2024-11-15 10:26:53.195540] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.415 [2024-11-15 10:26:53.195706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.415 [2024-11-15 10:26:53.212314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.415 [2024-11-15 10:26:53.212427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.415 [2024-11-15 10:26:53.222771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.415 [2024-11-15 10:26:53.222903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.415 [2024-11-15 10:26:53.235565] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.415 [2024-11-15 10:26:53.235705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.415 [2024-11-15 10:26:53.250821] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.415 [2024-11-15 10:26:53.250932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.675 [2024-11-15 10:26:53.267688] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.675 [2024-11-15 10:26:53.267918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.675 [2024-11-15 10:26:53.277855] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.675 [2024-11-15 10:26:53.278003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.675 [2024-11-15 10:26:53.290700] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.675 [2024-11-15 10:26:53.290828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.675 [2024-11-15 10:26:53.302490] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.675 [2024-11-15 10:26:53.302623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.675 [2024-11-15 10:26:53.318364] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.675 [2024-11-15 10:26:53.318535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.675 [2024-11-15 10:26:53.329522] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.675 [2024-11-15 10:26:53.329685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.675 [2024-11-15 10:26:53.344479] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.675 [2024-11-15 10:26:53.344923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.675 [2024-11-15 10:26:53.361348] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.675 [2024-11-15 10:26:53.361382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.675 [2024-11-15 10:26:53.378516] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.675 [2024-11-15 10:26:53.378568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.675 [2024-11-15 10:26:53.388865] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.675 [2024-11-15 10:26:53.388912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.675 [2024-11-15 10:26:53.401218] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.675 [2024-11-15 10:26:53.401279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.675 [2024-11-15 10:26:53.416334] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.675 [2024-11-15 10:26:53.416369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.675 [2024-11-15 10:26:53.434983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.675 [2024-11-15 10:26:53.435026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.675 [2024-11-15 10:26:53.449104] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.675 [2024-11-15 10:26:53.449150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.675 [2024-11-15 10:26:53.465563] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.675 [2024-11-15 10:26:53.465616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.675 [2024-11-15 10:26:53.475540] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.675 [2024-11-15 10:26:53.475590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.675 [2024-11-15 10:26:53.488408] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.675 [2024-11-15 10:26:53.488440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.675 [2024-11-15 10:26:53.502721] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.675 [2024-11-15 10:26:53.502758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.675 [2024-11-15 10:26:53.518133] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.675 [2024-11-15 10:26:53.518168] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.947 [2024-11-15 10:26:53.527901] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.948 [2024-11-15 10:26:53.527933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.948 [2024-11-15 10:26:53.544893] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.948 [2024-11-15 10:26:53.544929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.948 [2024-11-15 10:26:53.560309] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.948 [2024-11-15 10:26:53.560343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.948 [2024-11-15 10:26:53.570204] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.948 [2024-11-15 10:26:53.570236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.948 [2024-11-15 10:26:53.582168] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.948 [2024-11-15 10:26:53.582200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.948 [2024-11-15 10:26:53.592646] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.948 [2024-11-15 10:26:53.592678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.948 [2024-11-15 10:26:53.603569] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.948 [2024-11-15 10:26:53.603601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.948 [2024-11-15 10:26:53.619474] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.948 [2024-11-15 10:26:53.619506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.948 [2024-11-15 10:26:53.635862] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.948 [2024-11-15 10:26:53.635893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.948 [2024-11-15 10:26:53.646334] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.948 [2024-11-15 10:26:53.646362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.948 [2024-11-15 10:26:53.658824] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.948 [2024-11-15 10:26:53.658855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.948 [2024-11-15 10:26:53.674393] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.948 [2024-11-15 10:26:53.674424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.948 [2024-11-15 10:26:53.690621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.948 [2024-11-15 10:26:53.690655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.948 [2024-11-15 10:26:53.700768] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.948 [2024-11-15 10:26:53.700799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.948 [2024-11-15 10:26:53.713158] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.948 [2024-11-15 10:26:53.713188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.948 [2024-11-15 10:26:53.727381] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.948 [2024-11-15 10:26:53.727412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.948 [2024-11-15 10:26:53.743145] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.948 [2024-11-15 10:26:53.743179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.948 [2024-11-15 10:26:53.752854] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.948 [2024-11-15 10:26:53.752887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.948 [2024-11-15 10:26:53.767845] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.948 [2024-11-15 10:26:53.767878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.948 11143.50 IOPS, 87.06 MiB/s [2024-11-15T10:26:53.801Z] [2024-11-15 10:26:53.778363] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.948 [2024-11-15 10:26:53.778396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.948 [2024-11-15 10:26:53.790061] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.948 [2024-11-15 10:26:53.790106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.207 [2024-11-15 10:26:53.805664] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.207 [2024-11-15 10:26:53.805706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.207 [2024-11-15 10:26:53.815794] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.207 [2024-11-15 10:26:53.815829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.207 [2024-11-15 10:26:53.831231] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.207 [2024-11-15 10:26:53.831292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.207 [2024-11-15 10:26:53.841988] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.207 [2024-11-15 10:26:53.842024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.207 [2024-11-15 10:26:53.854022] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.207 [2024-11-15 10:26:53.854097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.207 [2024-11-15 10:26:53.864600] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.207 [2024-11-15 10:26:53.864632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.207 [2024-11-15 10:26:53.875691] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.207 [2024-11-15 10:26:53.875748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.207 [2024-11-15 10:26:53.891118] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:53.207 [2024-11-15 10:26:53.891181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.207 [2024-11-15 10:26:53.907231] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.207 [2024-11-15 10:26:53.907277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.207 [2024-11-15 10:26:53.917129] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.207 [2024-11-15 10:26:53.917181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.207 [2024-11-15 10:26:53.933359] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.207 [2024-11-15 10:26:53.933398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.207 [2024-11-15 10:26:53.949372] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.207 [2024-11-15 10:26:53.949409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.207 [2024-11-15 10:26:53.967694] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.207 [2024-11-15 10:26:53.967762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.207 [2024-11-15 10:26:53.981546] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.207 [2024-11-15 10:26:53.981603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.207 [2024-11-15 10:26:53.996940] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.207 [2024-11-15 10:26:53.996978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.207 [2024-11-15 10:26:54.007318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.207 [2024-11-15 10:26:54.007352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.207 [2024-11-15 10:26:54.019862] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.207 [2024-11-15 10:26:54.019898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.207 [2024-11-15 10:26:54.035356] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.207 [2024-11-15 10:26:54.035392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.207 [2024-11-15 10:26:54.052722] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.207 [2024-11-15 10:26:54.052761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.467 [2024-11-15 10:26:54.063149] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.467 [2024-11-15 10:26:54.063184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.467 [2024-11-15 10:26:54.075771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.467 [2024-11-15 10:26:54.075819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.467 [2024-11-15 10:26:54.087044] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.467 [2024-11-15 10:26:54.087087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.467 [2024-11-15 10:26:54.102666] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.467 [2024-11-15 10:26:54.102709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.467 [2024-11-15 10:26:54.112529] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.467 [2024-11-15 10:26:54.112593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.467 [2024-11-15 10:26:54.124913] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.467 [2024-11-15 10:26:54.124964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.467 [2024-11-15 10:26:54.136537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.467 [2024-11-15 10:26:54.136585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.467 [2024-11-15 10:26:54.152577] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.467 [2024-11-15 10:26:54.152613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.467 [2024-11-15 10:26:54.162211] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.467 [2024-11-15 10:26:54.162241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.467 [2024-11-15 10:26:54.174087] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.467 [2024-11-15 10:26:54.174146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.467 [2024-11-15 10:26:54.188819] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.467 [2024-11-15 10:26:54.188858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.467 [2024-11-15 10:26:54.199170] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.467 [2024-11-15 10:26:54.199201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.467 [2024-11-15 10:26:54.214320] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.467 [2024-11-15 10:26:54.214363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.467 [2024-11-15 10:26:54.232437] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.467 [2024-11-15 10:26:54.232473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.467 [2024-11-15 10:26:54.246897] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.467 [2024-11-15 10:26:54.246952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.467 [2024-11-15 10:26:54.262718] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.467 [2024-11-15 10:26:54.262756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.467 [2024-11-15 10:26:54.281534] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.467 [2024-11-15 10:26:54.281591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.467 [2024-11-15 10:26:54.292002] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.467 [2024-11-15 10:26:54.292041] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.467 [2024-11-15 10:26:54.307125] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.467 [2024-11-15 10:26:54.307206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.727 [2024-11-15 10:26:54.322486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.727 [2024-11-15 10:26:54.322521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.727 [2024-11-15 10:26:54.332565] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.727 [2024-11-15 10:26:54.332608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.727 [2024-11-15 10:26:54.348327] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.727 [2024-11-15 10:26:54.348376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.727 [2024-11-15 10:26:54.359845] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.727 [2024-11-15 10:26:54.359884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.727 [2024-11-15 10:26:54.376575] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.727 [2024-11-15 10:26:54.376622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.727 [2024-11-15 10:26:54.392212] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.727 [2024-11-15 10:26:54.392268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.727 [2024-11-15 10:26:54.402662] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.727 [2024-11-15 10:26:54.402695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.727 [2024-11-15 10:26:54.414680] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.727 [2024-11-15 10:26:54.414715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.727 [2024-11-15 10:26:54.430148] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.727 [2024-11-15 10:26:54.430194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.727 [2024-11-15 10:26:54.447017] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.727 [2024-11-15 10:26:54.447084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.727 [2024-11-15 10:26:54.457709] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.727 [2024-11-15 10:26:54.457741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.727 [2024-11-15 10:26:54.470061] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.727 [2024-11-15 10:26:54.470108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.727 [2024-11-15 10:26:54.485400] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.727 [2024-11-15 10:26:54.485439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.727 [2024-11-15 10:26:54.502149] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.727 [2024-11-15 10:26:54.502190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.727 [2024-11-15 10:26:54.519640] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.727 [2024-11-15 10:26:54.519687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.727 [2024-11-15 10:26:54.535760] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.727 [2024-11-15 10:26:54.535804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.727 [2024-11-15 10:26:54.545873] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.727 [2024-11-15 10:26:54.545905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.727 [2024-11-15 10:26:54.561845] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.727 [2024-11-15 10:26:54.561896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.988 [2024-11-15 10:26:54.578812] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.988 [2024-11-15 10:26:54.578852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.988 [2024-11-15 10:26:54.594134] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.988 [2024-11-15 10:26:54.594169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.988 [2024-11-15 10:26:54.604118] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.988 [2024-11-15 10:26:54.604152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.988 [2024-11-15 10:26:54.618305] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.988 [2024-11-15 10:26:54.618341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.988 [2024-11-15 10:26:54.634013] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.988 [2024-11-15 10:26:54.634076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.988 [2024-11-15 10:26:54.644181] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.988 [2024-11-15 10:26:54.644214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.988 [2024-11-15 10:26:54.659522] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.988 [2024-11-15 10:26:54.659594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.988 [2024-11-15 10:26:54.676874] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.988 [2024-11-15 10:26:54.676946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.988 [2024-11-15 10:26:54.687117] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.988 [2024-11-15 10:26:54.687153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.988 [2024-11-15 10:26:54.699537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.988 [2024-11-15 10:26:54.699609] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.988 [2024-11-15 10:26:54.710944] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.988 [2024-11-15 10:26:54.710979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.988 [2024-11-15 10:26:54.726722] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.988 [2024-11-15 10:26:54.726758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.988 [2024-11-15 10:26:54.744093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.988 [2024-11-15 10:26:54.744136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.988 [2024-11-15 10:26:54.754949] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.988 [2024-11-15 10:26:54.754987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.988 [2024-11-15 10:26:54.767123] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.988 [2024-11-15 10:26:54.767159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.988 11125.33 IOPS, 86.92 MiB/s [2024-11-15T10:26:54.841Z] [2024-11-15 10:26:54.778795] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.988 [2024-11-15 10:26:54.778828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.988 [2024-11-15 10:26:54.794562] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.988 [2024-11-15 10:26:54.794601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.988 [2024-11-15 10:26:54.811460] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.988 [2024-11-15 10:26:54.811504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.988 [2024-11-15 10:26:54.827670] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.988 [2024-11-15 10:26:54.827723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.988 [2024-11-15 10:26:54.837436] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.988 [2024-11-15 10:26:54.837474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.247 [2024-11-15 10:26:54.850297] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.247 [2024-11-15 10:26:54.850340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.247 [2024-11-15 10:26:54.864924] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.247 [2024-11-15 10:26:54.864970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.247 [2024-11-15 10:26:54.875532] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.247 [2024-11-15 10:26:54.875583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.247 [2024-11-15 10:26:54.890393] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.247 [2024-11-15 10:26:54.890429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.247 [2024-11-15 
10:26:54.906769] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.247 [2024-11-15 10:26:54.906810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.248 [2024-11-15 10:26:54.916911] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.248 [2024-11-15 10:26:54.916953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.248 [2024-11-15 10:26:54.929601] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.248 [2024-11-15 10:26:54.929643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.248 [2024-11-15 10:26:54.945021] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.248 [2024-11-15 10:26:54.945075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.248 [2024-11-15 10:26:54.961095] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.248 [2024-11-15 10:26:54.961152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.248 [2024-11-15 10:26:54.971377] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.248 [2024-11-15 10:26:54.971427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.248 [2024-11-15 10:26:54.987181] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.248 [2024-11-15 10:26:54.987227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.248 [2024-11-15 10:26:55.004787] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.248 [2024-11-15 10:26:55.004838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.248 [2024-11-15 10:26:55.015235] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.248 [2024-11-15 10:26:55.015277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.248 [2024-11-15 10:26:55.027607] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.248 [2024-11-15 10:26:55.027651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.248 [2024-11-15 10:26:55.043136] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.248 [2024-11-15 10:26:55.043177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.248 [2024-11-15 10:26:55.059329] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.248 [2024-11-15 10:26:55.059382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.248 [2024-11-15 10:26:55.070099] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.248 [2024-11-15 10:26:55.070143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.248 [2024-11-15 10:26:55.082310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.248 [2024-11-15 10:26:55.082355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.248 [2024-11-15 10:26:55.097613] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.248 [2024-11-15 10:26:55.097660] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.507 [2024-11-15 10:26:55.114848] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.507 [2024-11-15 10:26:55.114896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.507 [2024-11-15 10:26:55.125285] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.507 [2024-11-15 10:26:55.125322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.507 [2024-11-15 10:26:55.140544] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.507 [2024-11-15 10:26:55.140609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.507 [2024-11-15 10:26:55.155590] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.507 [2024-11-15 10:26:55.155633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.507 [2024-11-15 10:26:55.173689] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.507 [2024-11-15 10:26:55.173740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.507 [2024-11-15 10:26:55.185388] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.507 [2024-11-15 10:26:55.185430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.507 [2024-11-15 10:26:55.202528] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.507 [2024-11-15 10:26:55.202573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.507 [2024-11-15 10:26:55.217579] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.507 [2024-11-15 10:26:55.217618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.507 [2024-11-15 10:26:55.227503] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.507 [2024-11-15 10:26:55.227539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.507 [2024-11-15 10:26:55.240301] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.507 [2024-11-15 10:26:55.240339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.507 [2024-11-15 10:26:55.251534] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.507 [2024-11-15 10:26:55.251570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.507 [2024-11-15 10:26:55.268638] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.507 [2024-11-15 10:26:55.268684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.507 [2024-11-15 10:26:55.284894] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.507 [2024-11-15 10:26:55.284931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.507 [2024-11-15 10:26:55.303197] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.507 [2024-11-15 10:26:55.303280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.507 [2024-11-15 10:26:55.314184] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.507 [2024-11-15 10:26:55.314216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.507 [2024-11-15 10:26:55.324754] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.507 [2024-11-15 10:26:55.324790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.507 [2024-11-15 10:26:55.335722] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.507 [2024-11-15 10:26:55.335774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.507 [2024-11-15 10:26:55.351795] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.507 [2024-11-15 10:26:55.351830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.766 [2024-11-15 10:26:55.366023] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.766 [2024-11-15 10:26:55.366089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.766 [2024-11-15 10:26:55.375513] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.766 [2024-11-15 10:26:55.375546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.766 [2024-11-15 10:26:55.388466] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.766 [2024-11-15 10:26:55.388496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.766 [2024-11-15 10:26:55.403217] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.766 [2024-11-15 10:26:55.403248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.766 [2024-11-15 10:26:55.420809] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.766 [2024-11-15 10:26:55.420844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.766 [2024-11-15 10:26:55.430885] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.766 [2024-11-15 10:26:55.430946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.766 [2024-11-15 10:26:55.443072] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.766 [2024-11-15 10:26:55.443144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.766 [2024-11-15 10:26:55.454744] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.766 [2024-11-15 10:26:55.454798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.766 [2024-11-15 10:26:55.470804] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.766 [2024-11-15 10:26:55.470844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.766 [2024-11-15 10:26:55.487934] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.766 [2024-11-15 10:26:55.487969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.766 [2024-11-15 10:26:55.505280] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.766 [2024-11-15 10:26:55.505312] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.766 [2024-11-15 10:26:55.520965] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.766 [2024-11-15 10:26:55.521015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.766 [2024-11-15 10:26:55.540030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.766 [2024-11-15 10:26:55.540111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.766 [2024-11-15 10:26:55.551059] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.766 [2024-11-15 10:26:55.551105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.766 [2024-11-15 10:26:55.567566] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.766 [2024-11-15 10:26:55.567599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.766 [2024-11-15 10:26:55.585523] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.766 [2024-11-15 10:26:55.585578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.766 [2024-11-15 10:26:55.596135] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.766 [2024-11-15 10:26:55.596165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.766 [2024-11-15 10:26:55.610384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.767 [2024-11-15 10:26:55.610419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.026 [2024-11-15 10:26:55.626701] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.026 [2024-11-15 10:26:55.626737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.026 [2024-11-15 10:26:55.637569] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.026 [2024-11-15 10:26:55.637606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.026 [2024-11-15 10:26:55.649621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.026 [2024-11-15 10:26:55.649657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.026 [2024-11-15 10:26:55.665535] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.026 [2024-11-15 10:26:55.665589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.026 [2024-11-15 10:26:55.679276] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.026 [2024-11-15 10:26:55.679327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.026 [2024-11-15 10:26:55.689630] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.026 [2024-11-15 10:26:55.689696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.026 [2024-11-15 10:26:55.702683] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.026 [2024-11-15 10:26:55.702725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.026 [2024-11-15 10:26:55.718157] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.026 [2024-11-15 10:26:55.718192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.026 [2024-11-15 10:26:55.736381] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.026 [2024-11-15 10:26:55.736447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.026 [2024-11-15 10:26:55.752129] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.026 [2024-11-15 10:26:55.752200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.026 [2024-11-15 10:26:55.762424] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.026 [2024-11-15 10:26:55.762490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.026 11083.25 IOPS, 86.59 MiB/s [2024-11-15T10:26:55.879Z] [2024-11-15 10:26:55.777820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.026 [2024-11-15 10:26:55.777862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.026 [2024-11-15 10:26:55.788489] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.026 [2024-11-15 10:26:55.788524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.026 [2024-11-15 10:26:55.804102] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.026 [2024-11-15 10:26:55.804135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.026 [2024-11-15 10:26:55.820312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.026 [2024-11-15 10:26:55.820347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.026 [2024-11-15 10:26:55.830465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.026 [2024-11-15 10:26:55.830498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.026 [2024-11-15 10:26:55.845555] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.026 [2024-11-15 10:26:55.845589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.026 [2024-11-15 10:26:55.861250] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.026 [2024-11-15 10:26:55.861307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.026 [2024-11-15 10:26:55.871663] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.026 [2024-11-15 10:26:55.871738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.286 [2024-11-15 10:26:55.884012] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.286 [2024-11-15 10:26:55.884094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.286 [2024-11-15 10:26:55.899535] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.286 [2024-11-15 10:26:55.899598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.286 [2024-11-15 10:26:55.916134] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:55.286 [2024-11-15 10:26:55.916177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.286 [2024-11-15 10:26:55.926266] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.286 [2024-11-15 10:26:55.926301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.286 [2024-11-15 10:26:55.938095] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.286 [2024-11-15 10:26:55.938128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.286 [2024-11-15 10:26:55.953621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.286 [2024-11-15 10:26:55.953671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.286 [2024-11-15 10:26:55.969423] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.286 [2024-11-15 10:26:55.969468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.286 [2024-11-15 10:26:55.979480] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.286 [2024-11-15 10:26:55.979518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.286 [2024-11-15 10:26:55.992091] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.286 [2024-11-15 10:26:55.992170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.286 [2024-11-15 10:26:56.007503] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.286 [2024-11-15 10:26:56.007590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.286 [2024-11-15 10:26:56.024332] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.286 [2024-11-15 10:26:56.024369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.286 [2024-11-15 10:26:56.034678] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.286 [2024-11-15 10:26:56.034711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.286 [2024-11-15 10:26:56.049881] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.286 [2024-11-15 10:26:56.049953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.286 [2024-11-15 10:26:56.065309] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.286 [2024-11-15 10:26:56.065342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.287 [2024-11-15 10:26:56.075955] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.287 [2024-11-15 10:26:56.075987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.287 [2024-11-15 10:26:56.088274] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.287 [2024-11-15 10:26:56.088307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.287 [2024-11-15 10:26:56.103792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.287 [2024-11-15 10:26:56.103825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.287 [2024-11-15 10:26:56.119615] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.287 [2024-11-15 10:26:56.119666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.287 [2024-11-15 10:26:56.130235] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.287 [2024-11-15 10:26:56.130321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.546 [2024-11-15 10:26:56.145260] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.546 [2024-11-15 10:26:56.145323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.546 [2024-11-15 10:26:56.156330] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.546 [2024-11-15 10:26:56.156372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.546 [2024-11-15 10:26:56.170358] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.546 [2024-11-15 10:26:56.170403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.546 [2024-11-15 10:26:56.181393] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.546 [2024-11-15 10:26:56.181433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.546 [2024-11-15 10:26:56.192902] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.546 [2024-11-15 10:26:56.192967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.546 [2024-11-15 10:26:56.208767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.546 [2024-11-15 10:26:56.208823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.546 [2024-11-15 10:26:56.225170] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.546 [2024-11-15 10:26:56.225232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.546 [2024-11-15 10:26:56.235188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.546 [2024-11-15 10:26:56.235238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.546 [2024-11-15 10:26:56.248133] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.546 [2024-11-15 10:26:56.248190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.546 [2024-11-15 10:26:56.264245] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.546 [2024-11-15 10:26:56.264309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.546 [2024-11-15 10:26:56.279490] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.546 [2024-11-15 10:26:56.279570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.546 [2024-11-15 10:26:56.289282] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.546 [2024-11-15 10:26:56.289327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.546 [2024-11-15 10:26:56.301282] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.546 [2024-11-15 10:26:56.301337] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.546 [2024-11-15 10:26:56.316479] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.546 [2024-11-15 10:26:56.316576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.546 [2024-11-15 10:26:56.326655] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.546 [2024-11-15 10:26:56.326703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.546 [2024-11-15 10:26:56.342139] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.547 [2024-11-15 10:26:56.342178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.547 [2024-11-15 10:26:56.359000] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.547 [2024-11-15 10:26:56.359035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.547 [2024-11-15 10:26:56.369860] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.547 [2024-11-15 10:26:56.369890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.547 [2024-11-15 10:26:56.381711] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.547 [2024-11-15 10:26:56.381741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.547 [2024-11-15 10:26:56.393474] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.547 [2024-11-15 10:26:56.393526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.806 [2024-11-15 10:26:56.409938] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.806 [2024-11-15 10:26:56.409983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.806 [2024-11-15 10:26:56.425330] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.806 [2024-11-15 10:26:56.425363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.806 [2024-11-15 10:26:56.435819] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.806 [2024-11-15 10:26:56.435850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.806 [2024-11-15 10:26:56.447948] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.806 [2024-11-15 10:26:56.447978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.806 [2024-11-15 10:26:56.459700] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.806 [2024-11-15 10:26:56.459759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.806 [2024-11-15 10:26:56.471443] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.806 [2024-11-15 10:26:56.471475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.807 [2024-11-15 10:26:56.488859] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.807 [2024-11-15 10:26:56.488923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.807 [2024-11-15 10:26:56.505251] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.807 [2024-11-15 10:26:56.505299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.807 [2024-11-15 10:26:56.515678] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.807 [2024-11-15 10:26:56.515739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.807 [2024-11-15 10:26:56.531183] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.807 [2024-11-15 10:26:56.531214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.807 [2024-11-15 10:26:56.545544] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.807 [2024-11-15 10:26:56.545591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.807 [2024-11-15 10:26:56.562117] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.807 [2024-11-15 10:26:56.562167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.807 [2024-11-15 10:26:56.572654] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.807 [2024-11-15 10:26:56.572686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.807 [2024-11-15 10:26:56.587843] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.807 [2024-11-15 10:26:56.587874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.807 [2024-11-15 10:26:56.602205] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.807 [2024-11-15 10:26:56.602253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.807 [2024-11-15 10:26:56.612350] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.807 [2024-11-15 10:26:56.612380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.807 [2024-11-15 10:26:56.624430] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.807 [2024-11-15 10:26:56.624461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.807 [2024-11-15 10:26:56.639120] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.807 [2024-11-15 10:26:56.639161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.807 [2024-11-15 10:26:56.649410] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.807 [2024-11-15 10:26:56.649464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 10:26:56.661404] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 10:26:56.661454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 10:26:56.676932] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 10:26:56.676981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 10:26:56.692764] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 10:26:56.692806] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 10:26:56.703056] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 10:26:56.703103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 10:26:56.715117] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 10:26:56.715149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 10:26:56.729842] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 10:26:56.729886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 10:26:56.746651] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 10:26:56.746716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 10:26:56.762728] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 10:26:56.762800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 11055.80 IOPS, 86.37 MiB/s [2024-11-15T10:26:56.920Z] [2024-11-15 10:26:56.772479] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 10:26:56.772518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 00:08:56.067 Latency(us) 00:08:56.067 [2024-11-15T10:26:56.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.067 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:56.067 Nvme1n1 : 5.01 11067.09 86.46 0.00 0.00 11554.33 4825.83 20137.43 00:08:56.067 [2024-11-15T10:26:56.920Z] =================================================================================================================== 00:08:56.067 [2024-11-15T10:26:56.920Z] Total : 11067.09 86.46 0.00 0.00 11554.33 4825.83 20137.43 00:08:56.067 [2024-11-15 10:26:56.782768] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 10:26:56.782801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 10:26:56.790754] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 10:26:56.790783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 10:26:56.798755] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 10:26:56.798780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 10:26:56.810757] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 10:26:56.810784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 10:26:56.818772] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 10:26:56.818798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 10:26:56.826759] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 
10:26:56.826784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 10:26:56.838774] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 10:26:56.838800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 10:26:56.850781] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 10:26:56.850808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 10:26:56.862777] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 10:26:56.862804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 10:26:56.870778] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 10:26:56.870803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 10:26:56.882783] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 10:26:56.882808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 10:26:56.894794] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 10:26:56.894821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 10:26:56.906852] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 10:26:56.906904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 [2024-11-15 10:26:56.918811] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 10:26:56.918854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 [2024-11-15 10:26:56.930799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 10:26:56.930824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 [2024-11-15 10:26:56.942801] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 10:26:56.942830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 [2024-11-15 10:26:56.954810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 10:26:56.954836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 [2024-11-15 10:26:56.966817] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 10:26:56.966843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 [2024-11-15 10:26:56.978819] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 10:26:56.978847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65417) - No such process 00:08:56.327 10:26:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65417 00:08:56.327 10:26:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.327 10:26:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.327 10:26:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:56.327 10:26:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.327 10:26:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:56.327 10:26:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.327 10:26:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:56.327 delay0 00:08:56.327 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.327 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:56.327 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.327 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:56.327 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.327 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:08:56.586 [2024-11-15 10:26:57.188663] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:03.225 Initializing NVMe Controllers 00:09:03.225 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:03.225 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:03.225 Initialization complete. Launching workers. 
00:09:03.225 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 61 00:09:03.225 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 348, failed to submit 33 00:09:03.225 success 224, unsuccessful 124, failed 0 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:03.225 rmmod nvme_tcp 00:09:03.225 rmmod nvme_fabrics 00:09:03.225 rmmod nvme_keyring 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65270 ']' 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65270 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 65270 ']' 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 65270 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65270 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:03.225 killing process with pid 65270 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65270' 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 65270 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 65270 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:03.225 10:27:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:09:03.225 ************************************ 00:09:03.225 END TEST nvmf_zcopy 00:09:03.225 ************************************ 00:09:03.225 00:09:03.225 real 0m24.268s 00:09:03.225 user 0m39.300s 00:09:03.225 sys 0m7.027s 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:03.225 ************************************ 00:09:03.225 START TEST nvmf_nmic 00:09:03.225 ************************************ 00:09:03.225 10:27:03 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:03.225 * Looking for test storage... 00:09:03.225 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:03.225 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.225 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:03.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.225 --rc genhtml_branch_coverage=1 00:09:03.225 --rc genhtml_function_coverage=1 00:09:03.225 --rc genhtml_legend=1 00:09:03.226 --rc geninfo_all_blocks=1 00:09:03.226 --rc geninfo_unexecuted_blocks=1 00:09:03.226 00:09:03.226 ' 00:09:03.226 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:03.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.226 --rc genhtml_branch_coverage=1 00:09:03.226 --rc genhtml_function_coverage=1 00:09:03.226 --rc genhtml_legend=1 00:09:03.226 --rc geninfo_all_blocks=1 00:09:03.226 --rc geninfo_unexecuted_blocks=1 00:09:03.226 00:09:03.226 ' 00:09:03.226 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:03.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.226 --rc genhtml_branch_coverage=1 00:09:03.226 --rc genhtml_function_coverage=1 00:09:03.226 --rc genhtml_legend=1 00:09:03.226 --rc geninfo_all_blocks=1 00:09:03.226 --rc geninfo_unexecuted_blocks=1 00:09:03.226 00:09:03.226 ' 00:09:03.226 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:03.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.226 --rc genhtml_branch_coverage=1 00:09:03.226 --rc genhtml_function_coverage=1 00:09:03.226 --rc genhtml_legend=1 00:09:03.226 --rc geninfo_all_blocks=1 00:09:03.226 --rc geninfo_unexecuted_blocks=1 00:09:03.226 00:09:03.226 ' 00:09:03.226 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:03.226 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.484 10:27:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:03.484 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:03.484 10:27:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:03.484 Cannot 
find device "nvmf_init_br" 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:03.484 Cannot find device "nvmf_init_br2" 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:03.484 Cannot find device "nvmf_tgt_br" 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:03.484 Cannot find device "nvmf_tgt_br2" 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:03.484 Cannot find device "nvmf_init_br" 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:03.484 Cannot find device "nvmf_init_br2" 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:03.484 Cannot find device "nvmf_tgt_br" 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:03.484 Cannot find device "nvmf_tgt_br2" 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:03.484 Cannot find device "nvmf_br" 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:09:03.484 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:03.485 Cannot find device "nvmf_init_if" 00:09:03.485 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:09:03.485 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:03.485 Cannot find device "nvmf_init_if2" 00:09:03.485 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:09:03.485 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:03.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:03.485 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:09:03.485 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:03.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:03.485 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:09:03.485 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:03.485 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:09:03.485 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:03.485 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:03.485 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:03.485 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:03.485 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:03.485 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:03.744 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:03.744 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:03.744 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:03.744 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:03.744 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:03.744 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:03.744 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:03.744 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:03.744 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:03.744 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:03.744 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:03.744 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:03.744 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:03.744 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:03.744 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:03.744 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:03.745 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:03.745 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:09:03.745 00:09:03.745 --- 10.0.0.3 ping statistics --- 00:09:03.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.745 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:03.745 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:03.745 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:09:03.745 00:09:03.745 --- 10.0.0.4 ping statistics --- 00:09:03.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.745 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:03.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:03.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:09:03.745 00:09:03.745 --- 10.0.0.1 ping statistics --- 00:09:03.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.745 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:03.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:03.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:09:03.745 00:09:03.745 --- 10.0.0.2 ping statistics --- 00:09:03.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.745 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65793 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65793 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 65793 ']' 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:03.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:03.745 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:03.745 [2024-11-15 10:27:04.591132] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:09:03.745 [2024-11-15 10:27:04.591241] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.003 [2024-11-15 10:27:04.745219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:04.003 [2024-11-15 10:27:04.816924] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.003 [2024-11-15 10:27:04.817001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.003 [2024-11-15 10:27:04.817015] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.003 [2024-11-15 10:27:04.817026] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.003 [2024-11-15 10:27:04.817035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:04.003 [2024-11-15 10:27:04.818285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.003 [2024-11-15 10:27:04.818430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.003 [2024-11-15 10:27:04.818634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.003 [2024-11-15 10:27:04.818495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.261 [2024-11-15 10:27:04.875079] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:04.830 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:04.830 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:09:04.830 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:04.830 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:04.830 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:04.830 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.830 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:04.830 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.830 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:04.830 [2024-11-15 10:27:05.622953] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:04.830 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.830 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:04.830 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.830 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:04.830 Malloc0 00:09:04.830 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.830 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:04.830 10:27:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.830 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:04.830 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.830 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:04.830 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.830 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:05.088 [2024-11-15 10:27:05.684923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:05.088 test case1: single bdev can't be used in multiple subsystems 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:05.088 [2024-11-15 10:27:05.708758] bdev.c:8468:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:05.088 [2024-11-15 10:27:05.708802] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:05.088 [2024-11-15 10:27:05.708813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.088 request: 00:09:05.088 { 00:09:05.088 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:05.088 "namespace": { 00:09:05.088 "bdev_name": "Malloc0", 00:09:05.088 "no_auto_visible": false, 00:09:05.088 "no_metadata": false 00:09:05.088 }, 00:09:05.088 "method": "nvmf_subsystem_add_ns", 00:09:05.088 "req_id": 1 00:09:05.088 } 00:09:05.088 Got JSON-RPC error response 00:09:05.088 response: 00:09:05.088 { 00:09:05.088 "code": -32602, 00:09:05.088 "message": "Invalid parameters" 00:09:05.088 } 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:05.088 Adding namespace failed - expected result. 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:05.088 test case2: host connect to nvmf target in multiple paths 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:05.088 [2024-11-15 10:27:05.724924] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid=b4733420-cf17-49bc-adb6-f89fe6fa7a33 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:05.088 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid=b4733420-cf17-49bc-adb6-f89fe6fa7a33 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:09:05.346 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:05.346 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:09:05.346 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:05.346 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:05.346 10:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:09:07.250 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:07.250 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:07.250 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:07.250 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:07.250 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 
00:09:07.251 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:09:07.251 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:07.251 [global] 00:09:07.251 thread=1 00:09:07.251 invalidate=1 00:09:07.251 rw=write 00:09:07.251 time_based=1 00:09:07.251 runtime=1 00:09:07.251 ioengine=libaio 00:09:07.251 direct=1 00:09:07.251 bs=4096 00:09:07.251 iodepth=1 00:09:07.251 norandommap=0 00:09:07.251 numjobs=1 00:09:07.251 00:09:07.251 verify_dump=1 00:09:07.251 verify_backlog=512 00:09:07.251 verify_state_save=0 00:09:07.251 do_verify=1 00:09:07.251 verify=crc32c-intel 00:09:07.251 [job0] 00:09:07.251 filename=/dev/nvme0n1 00:09:07.251 Could not set queue depth (nvme0n1) 00:09:07.509 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:07.509 fio-3.35 00:09:07.509 Starting 1 thread 00:09:08.888 00:09:08.888 job0: (groupid=0, jobs=1): err= 0: pid=65879: Fri Nov 15 10:27:09 2024 00:09:08.888 read: IOPS=2901, BW=11.3MiB/s (11.9MB/s)(11.3MiB/1001msec) 00:09:08.888 slat (nsec): min=11848, max=62672, avg=15990.75, stdev=4931.05 00:09:08.888 clat (usec): min=141, max=304, avg=183.76, stdev=18.87 00:09:08.888 lat (usec): min=155, max=316, avg=199.75, stdev=20.17 00:09:08.888 clat percentiles (usec): 00:09:08.888 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:09:08.888 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:09:08.888 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 219], 00:09:08.888 | 99.00th=[ 243], 99.50th=[ 253], 99.90th=[ 281], 99.95th=[ 293], 00:09:08.888 | 99.99th=[ 306] 00:09:08.888 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:08.888 slat (usec): min=17, max=136, avg=22.56, stdev= 6.62 00:09:08.888 clat (usec): min=85, max=315, avg=110.92, stdev=13.57 00:09:08.888 lat (usec): min=106, max=442, avg=133.48, stdev=16.92 00:09:08.888 clat percentiles (usec): 00:09:08.888 | 1.00th=[ 91], 5.00th=[ 94], 10.00th=[ 97], 20.00th=[ 101], 00:09:08.888 | 30.00th=[ 104], 40.00th=[ 108], 50.00th=[ 110], 60.00th=[ 113], 00:09:08.888 | 70.00th=[ 116], 80.00th=[ 120], 90.00th=[ 127], 95.00th=[ 135], 00:09:08.888 | 99.00th=[ 151], 99.50th=[ 159], 99.90th=[ 182], 99.95th=[ 306], 00:09:08.888 | 99.99th=[ 318] 00:09:08.888 bw ( KiB/s): min=12263, max=12263, per=99.90%, avg=12263.00, stdev= 0.00, samples=1 00:09:08.888 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:09:08.888 lat (usec) : 100=9.52%, 250=90.16%, 500=0.32% 00:09:08.888 cpu : usr=2.60%, sys=9.10%, ctx=5977, majf=0, minf=5 00:09:08.888 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:08.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.888 issued rwts: total=2904,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.888 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:08.888 00:09:08.888 Run status group 0 (all jobs): 00:09:08.888 READ: bw=11.3MiB/s (11.9MB/s), 11.3MiB/s-11.3MiB/s (11.9MB/s-11.9MB/s), io=11.3MiB (11.9MB), run=1001-1001msec 00:09:08.888 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:08.888 00:09:08.888 Disk stats (read/write): 00:09:08.888 nvme0n1: ios=2610/2861, merge=0/0, ticks=504/343, 
in_queue=847, util=91.48% 00:09:08.888 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:08.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:08.888 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:08.888 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:09:08.888 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:08.888 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.888 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:08.888 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.888 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:09:08.888 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:08.888 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:08.888 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:08.888 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:08.888 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:08.888 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:08.888 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:08.888 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:08.888 rmmod nvme_tcp 00:09:08.888 rmmod nvme_fabrics 00:09:08.888 rmmod nvme_keyring 00:09:08.888 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:08.888 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:08.888 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:08.888 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65793 ']' 00:09:08.888 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65793 00:09:08.888 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 65793 ']' 00:09:08.888 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 65793 00:09:08.889 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:09:08.889 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:08.889 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65793 00:09:08.889 killing process with pid 65793 00:09:08.889 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:08.889 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:08.889 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65793' 00:09:08.889 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # 
kill 65793 00:09:08.889 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 65793 00:09:09.148 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:09.148 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:09.148 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:09.148 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:09.148 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:09.148 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:09.148 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:09.148 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:09.148 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:09.148 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:09.148 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:09.148 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:09.148 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.148 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:09.148 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:09.148 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:09.148 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:09.148 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:09.407 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:09.407 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:09.407 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.407 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:09.407 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:09.407 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.407 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.407 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.407 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:09:09.407 00:09:09.407 real 0m6.237s 00:09:09.407 user 0m19.050s 00:09:09.407 sys 0m2.366s 00:09:09.407 ************************************ 00:09:09.407 END TEST nvmf_nmic 00:09:09.407 ************************************ 00:09:09.407 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:09.407 10:27:10 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:09.407 10:27:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:09.407 10:27:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:09.407 10:27:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:09.407 10:27:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.407 ************************************ 00:09:09.407 START TEST nvmf_fio_target 00:09:09.407 ************************************ 00:09:09.407 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:09.407 * Looking for test storage... 00:09:09.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:09.667 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:09.667 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:09.667 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:09.667 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:09.667 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.667 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.667 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.667 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.667 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.667 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.667 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.667 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.667 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.667 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.667 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.667 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:09.667 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:09.667 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.667 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.667 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:09.667 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:09.667 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.667 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:09.667 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:09.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.668 --rc genhtml_branch_coverage=1 00:09:09.668 --rc genhtml_function_coverage=1 00:09:09.668 --rc genhtml_legend=1 00:09:09.668 --rc geninfo_all_blocks=1 00:09:09.668 --rc geninfo_unexecuted_blocks=1 00:09:09.668 00:09:09.668 ' 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:09.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.668 --rc genhtml_branch_coverage=1 00:09:09.668 --rc genhtml_function_coverage=1 00:09:09.668 --rc genhtml_legend=1 00:09:09.668 --rc geninfo_all_blocks=1 00:09:09.668 --rc geninfo_unexecuted_blocks=1 00:09:09.668 00:09:09.668 ' 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:09.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.668 --rc genhtml_branch_coverage=1 00:09:09.668 --rc genhtml_function_coverage=1 00:09:09.668 --rc genhtml_legend=1 00:09:09.668 --rc geninfo_all_blocks=1 00:09:09.668 --rc geninfo_unexecuted_blocks=1 00:09:09.668 00:09:09.668 ' 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:09.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.668 --rc genhtml_branch_coverage=1 00:09:09.668 --rc genhtml_function_coverage=1 00:09:09.668 --rc genhtml_legend=1 00:09:09.668 --rc geninfo_all_blocks=1 00:09:09.668 --rc geninfo_unexecuted_blocks=1 00:09:09.668 00:09:09.668 ' 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:09.668 
10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:09.668 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:09.668 10:27:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:09.668 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:09.669 Cannot find device "nvmf_init_br" 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:09.669 Cannot find device "nvmf_init_br2" 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:09.669 Cannot find device "nvmf_tgt_br" 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.669 Cannot find device "nvmf_tgt_br2" 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:09.669 Cannot find device "nvmf_init_br" 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:09.669 Cannot find device "nvmf_init_br2" 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:09.669 Cannot find device "nvmf_tgt_br" 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:09.669 Cannot find device "nvmf_tgt_br2" 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:09.669 Cannot find device "nvmf_br" 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:09.669 Cannot find device "nvmf_init_if" 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:09.669 Cannot find device "nvmf_init_if2" 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.669 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:09:09.669 
10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:09.669 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:09:09.669 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:09.928 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:09.928 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:09.928 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:09.928 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:09.928 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:09.928 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:09.928 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:09.928 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:09.928 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:09.928 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:09.928 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:09.928 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:09.928 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:09.928 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:09.928 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:09.928 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:09.928 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:09.928 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:09.928 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:09.928 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:09.928 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:09.929 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:09.929 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:09:09.929 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:09.929 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:09.929 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:09.929 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:09.929 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:09.929 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:09.929 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:09.929 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:09.929 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:09.929 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:09.929 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:09:09.929 00:09:09.929 --- 10.0.0.3 ping statistics --- 00:09:09.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.929 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:09:09.929 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:10.188 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:10.188 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:09:10.188 00:09:10.188 --- 10.0.0.4 ping statistics --- 00:09:10.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.188 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:10.188 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:10.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:10.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:09:10.188 00:09:10.188 --- 10.0.0.1 ping statistics --- 00:09:10.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.188 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:09:10.188 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:10.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
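[Editor's note] For readability, the following is a hand-condensed sketch of the test topology that the nvmf_veth_init trace above sets up. It is not taken from the repository source; it simply restates the commands visible in the trace (the real helper also creates a second veth pair, nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4, and wraps iptables in the ipts function so the rules can be removed on teardown):

    # veth pair per side; the target-side end is moved into a private namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # address the two ends (initiator 10.0.0.1, target 10.0.0.3) and bring links up
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # a bridge joins the host-side peer interfaces so the namespaces can talk
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

    # open the NVMe/TCP port and allow bridged forwarding, then verify reachability
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3

The ping round-trips recorded below (10.0.0.3/10.0.0.4 from the host, 10.0.0.1/10.0.0.2 from inside the namespace) are the script's confirmation that this topology is functional before the target is started.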
00:09:10.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:09:10.188 00:09:10.188 --- 10.0.0.2 ping statistics --- 00:09:10.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.188 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:10.188 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.188 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:09:10.188 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:10.188 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.188 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:10.188 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:10.188 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.188 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:10.188 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:10.188 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:10.188 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:10.188 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:10.188 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:10.189 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66119 00:09:10.189 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:10.189 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66119 00:09:10.189 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 66119 ']' 00:09:10.189 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.189 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:10.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.189 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.189 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:10.189 10:27:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:10.189 [2024-11-15 10:27:10.900364] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:09:10.189 [2024-11-15 10:27:10.900466] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.448 [2024-11-15 10:27:11.059544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.448 [2024-11-15 10:27:11.131787] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.448 [2024-11-15 10:27:11.131852] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.448 [2024-11-15 10:27:11.131867] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.448 [2024-11-15 10:27:11.131877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.448 [2024-11-15 10:27:11.131886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.448 [2024-11-15 10:27:11.133130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.448 [2024-11-15 10:27:11.133215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.448 [2024-11-15 10:27:11.133341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.448 [2024-11-15 10:27:11.133348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.448 [2024-11-15 10:27:11.191592] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:11.384 10:27:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:11.384 10:27:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:09:11.384 10:27:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:11.384 10:27:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:11.384 10:27:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:11.384 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:11.384 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:11.642 [2024-11-15 10:27:12.326998] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.642 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:11.900 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:11.900 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:12.159 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:12.159 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:12.418 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:12.418 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:12.985 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:12.985 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:12.985 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:13.243 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:13.243 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:13.809 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:13.809 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:14.066 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:14.066 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:14.325 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:14.581 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:14.581 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:14.838 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:14.838 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:15.097 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:15.355 [2024-11-15 10:27:16.196726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:15.613 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:15.871 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:16.130 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid=b4733420-cf17-49bc-adb6-f89fe6fa7a33 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:16.130 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:16.130 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:09:16.130 10:27:16 
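[Editor's note] The provisioning steps traced above (fio.sh lines 17-46) are easier to follow when condensed. The sketch below only restates the RPC calls shown in the log; the loops and the backgrounding of nvmf_tgt are editorial shorthand, not the actual fio.sh source, and rpc.py talks to the default /var/tmp/spdk.sock socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # start the target inside the namespace (the test waits for the RPC socket
    # via waitforlisten before issuing any RPCs)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # TCP transport with the options used by this run
    $rpc nvmf_create_transport -t tcp -o -u 8192

    # seven 64 MiB, 512 B-block malloc bdevs; names Malloc0..Malloc6 are auto-assigned
    for _ in $(seq 1 7); do
        $rpc bdev_malloc_create 64 512
    done

    # two RAID bdevs built from the later malloc devices
    $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'

    # one subsystem exposing four namespaces, listening on the namespaced address
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $bdev
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # initiator side: connect over TCP and wait for nvme0n1..nvme0n4 to appear
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 \
        --hostid=b4733420-cf17-49bc-adb6-f89fe6fa7a33

The waitforserial loop in the trace that follows is what confirms all four namespaces (serial SPDKISFASTANDAWESOME) are visible via lsblk before fio-wrapper starts its write, randwrite, and iodepth-128 jobs against /dev/nvme0n1 through /dev/nvme0n4.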
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:16.130 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:09:16.130 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:09:16.130 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:09:18.682 10:27:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:18.682 10:27:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:18.682 10:27:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:18.682 10:27:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:09:18.682 10:27:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:18.682 10:27:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:09:18.682 10:27:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:18.682 [global] 00:09:18.682 thread=1 00:09:18.682 invalidate=1 00:09:18.682 rw=write 00:09:18.682 time_based=1 00:09:18.682 runtime=1 00:09:18.682 ioengine=libaio 00:09:18.682 direct=1 00:09:18.682 bs=4096 00:09:18.682 iodepth=1 00:09:18.682 norandommap=0 00:09:18.682 numjobs=1 00:09:18.682 00:09:18.682 verify_dump=1 00:09:18.682 verify_backlog=512 00:09:18.682 verify_state_save=0 00:09:18.682 do_verify=1 00:09:18.682 verify=crc32c-intel 00:09:18.682 [job0] 00:09:18.682 filename=/dev/nvme0n1 00:09:18.682 [job1] 00:09:18.682 filename=/dev/nvme0n2 00:09:18.682 [job2] 00:09:18.682 filename=/dev/nvme0n3 00:09:18.682 [job3] 00:09:18.682 filename=/dev/nvme0n4 00:09:18.682 Could not set queue depth (nvme0n1) 00:09:18.682 Could not set queue depth (nvme0n2) 00:09:18.682 Could not set queue depth (nvme0n3) 00:09:18.682 Could not set queue depth (nvme0n4) 00:09:18.682 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.682 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.682 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.682 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.682 fio-3.35 00:09:18.682 Starting 4 threads 00:09:19.620 00:09:19.620 job0: (groupid=0, jobs=1): err= 0: pid=66311: Fri Nov 15 10:27:20 2024 00:09:19.620 read: IOPS=808, BW=3233KiB/s (3310kB/s)(3236KiB/1001msec) 00:09:19.620 slat (usec): min=18, max=241, avg=41.22, stdev=15.47 00:09:19.620 clat (usec): min=267, max=1398, avg=572.50, stdev=142.54 00:09:19.620 lat (usec): min=309, max=1435, avg=613.71, stdev=148.25 00:09:19.620 clat percentiles (usec): 00:09:19.620 | 1.00th=[ 371], 5.00th=[ 437], 10.00th=[ 453], 20.00th=[ 469], 00:09:19.620 | 30.00th=[ 482], 40.00th=[ 490], 50.00th=[ 502], 60.00th=[ 523], 00:09:19.620 | 70.00th=[ 668], 80.00th=[ 717], 90.00th=[ 766], 95.00th=[ 816], 00:09:19.620 | 99.00th=[ 979], 99.50th=[ 1012], 99.90th=[ 1401], 99.95th=[ 1401], 00:09:19.620 | 99.99th=[ 1401] 
00:09:19.620 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:19.620 slat (usec): min=24, max=139, avg=46.45, stdev=11.30 00:09:19.620 clat (usec): min=158, max=1379, avg=437.33, stdev=139.23 00:09:19.620 lat (usec): min=192, max=1447, avg=483.78, stdev=145.16 00:09:19.620 clat percentiles (usec): 00:09:19.620 | 1.00th=[ 169], 5.00th=[ 186], 10.00th=[ 204], 20.00th=[ 379], 00:09:19.620 | 30.00th=[ 400], 40.00th=[ 416], 50.00th=[ 429], 60.00th=[ 441], 00:09:19.620 | 70.00th=[ 453], 80.00th=[ 482], 90.00th=[ 668], 95.00th=[ 701], 00:09:19.620 | 99.00th=[ 750], 99.50th=[ 758], 99.90th=[ 824], 99.95th=[ 1385], 00:09:19.620 | 99.99th=[ 1385] 00:09:19.620 bw ( KiB/s): min= 4096, max= 4096, per=17.48%, avg=4096.00, stdev= 0.00, samples=1 00:09:19.620 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:19.620 lat (usec) : 250=6.55%, 500=60.56%, 750=26.62%, 1000=5.89% 00:09:19.620 lat (msec) : 2=0.38% 00:09:19.620 cpu : usr=2.20%, sys=6.10%, ctx=1837, majf=0, minf=11 00:09:19.620 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:19.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.620 issued rwts: total=809,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.620 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:19.620 job1: (groupid=0, jobs=1): err= 0: pid=66312: Fri Nov 15 10:27:20 2024 00:09:19.620 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:09:19.620 slat (nsec): min=8859, max=74771, avg=20512.75, stdev=7151.28 00:09:19.620 clat (usec): min=318, max=840, avg=501.42, stdev=71.41 00:09:19.620 lat (usec): min=335, max=854, avg=521.94, stdev=71.57 00:09:19.620 clat percentiles (usec): 00:09:19.620 | 1.00th=[ 379], 5.00th=[ 424], 10.00th=[ 437], 20.00th=[ 457], 00:09:19.620 | 30.00th=[ 469], 40.00th=[ 482], 50.00th=[ 490], 60.00th=[ 498], 00:09:19.620 | 70.00th=[ 515], 80.00th=[ 529], 90.00th=[ 562], 95.00th=[ 652], 00:09:19.620 | 99.00th=[ 799], 99.50th=[ 807], 99.90th=[ 824], 99.95th=[ 840], 00:09:19.620 | 99.99th=[ 840] 00:09:19.620 write: IOPS=1139, BW=4559KiB/s (4669kB/s)(4564KiB/1001msec); 0 zone resets 00:09:19.620 slat (usec): min=11, max=158, avg=32.25, stdev=11.49 00:09:19.620 clat (usec): min=237, max=809, avg=370.79, stdev=60.90 00:09:19.620 lat (usec): min=263, max=851, avg=403.05, stdev=60.26 00:09:19.620 clat percentiles (usec): 00:09:19.620 | 1.00th=[ 265], 5.00th=[ 285], 10.00th=[ 297], 20.00th=[ 318], 00:09:19.620 | 30.00th=[ 334], 40.00th=[ 347], 50.00th=[ 363], 60.00th=[ 379], 00:09:19.620 | 70.00th=[ 404], 80.00th=[ 429], 90.00th=[ 453], 95.00th=[ 469], 00:09:19.620 | 99.00th=[ 519], 99.50th=[ 545], 99.90th=[ 578], 99.95th=[ 807], 00:09:19.620 | 99.99th=[ 807] 00:09:19.620 bw ( KiB/s): min= 4688, max= 4688, per=20.00%, avg=4688.00, stdev= 0.00, samples=1 00:09:19.620 iops : min= 1172, max= 1172, avg=1172.00, stdev= 0.00, samples=1 00:09:19.620 lat (usec) : 250=0.14%, 500=80.18%, 750=18.48%, 1000=1.20% 00:09:19.620 cpu : usr=1.60%, sys=4.70%, ctx=2165, majf=0, minf=5 00:09:19.620 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:19.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.620 issued rwts: total=1024,1141,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.620 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:09:19.620 job2: (groupid=0, jobs=1): err= 0: pid=66313: Fri Nov 15 10:27:20 2024 00:09:19.620 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:09:19.620 slat (nsec): min=11996, max=69705, avg=30422.38, stdev=8380.12 00:09:19.620 clat (usec): min=310, max=838, avg=490.47, stdev=69.97 00:09:19.620 lat (usec): min=341, max=867, avg=520.89, stdev=71.18 00:09:19.620 clat percentiles (usec): 00:09:19.620 | 1.00th=[ 371], 5.00th=[ 412], 10.00th=[ 429], 20.00th=[ 449], 00:09:19.620 | 30.00th=[ 457], 40.00th=[ 469], 50.00th=[ 478], 60.00th=[ 490], 00:09:19.620 | 70.00th=[ 502], 80.00th=[ 515], 90.00th=[ 553], 95.00th=[ 627], 00:09:19.620 | 99.00th=[ 783], 99.50th=[ 783], 99.90th=[ 807], 99.95th=[ 840], 00:09:19.620 | 99.99th=[ 840] 00:09:19.620 write: IOPS=1138, BW=4555KiB/s (4665kB/s)(4560KiB/1001msec); 0 zone resets 00:09:19.620 slat (usec): min=17, max=113, avg=37.19, stdev=11.35 00:09:19.620 clat (usec): min=215, max=869, avg=365.93, stdev=56.39 00:09:19.620 lat (usec): min=273, max=917, avg=403.12, stdev=58.74 00:09:19.620 clat percentiles (usec): 00:09:19.620 | 1.00th=[ 258], 5.00th=[ 277], 10.00th=[ 297], 20.00th=[ 318], 00:09:19.620 | 30.00th=[ 334], 40.00th=[ 351], 50.00th=[ 363], 60.00th=[ 375], 00:09:19.620 | 70.00th=[ 400], 80.00th=[ 412], 90.00th=[ 437], 95.00th=[ 453], 00:09:19.620 | 99.00th=[ 506], 99.50th=[ 529], 99.90th=[ 594], 99.95th=[ 873], 00:09:19.620 | 99.99th=[ 873] 00:09:19.620 bw ( KiB/s): min= 4672, max= 4672, per=19.93%, avg=4672.00, stdev= 0.00, samples=1 00:09:19.620 iops : min= 1168, max= 1168, avg=1168.00, stdev= 0.00, samples=1 00:09:19.620 lat (usec) : 250=0.28%, 500=84.24%, 750=14.37%, 1000=1.11% 00:09:19.620 cpu : usr=1.50%, sys=6.70%, ctx=2164, majf=0, minf=13 00:09:19.620 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:19.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.620 issued rwts: total=1024,1140,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.620 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:19.620 job3: (groupid=0, jobs=1): err= 0: pid=66314: Fri Nov 15 10:27:20 2024 00:09:19.620 read: IOPS=2339, BW=9359KiB/s (9583kB/s)(9368KiB/1001msec) 00:09:19.620 slat (usec): min=11, max=117, avg=15.25, stdev= 4.87 00:09:19.621 clat (usec): min=147, max=2674, avg=208.66, stdev=55.67 00:09:19.621 lat (usec): min=160, max=2689, avg=223.91, stdev=56.07 00:09:19.621 clat percentiles (usec): 00:09:19.621 | 1.00th=[ 161], 5.00th=[ 176], 10.00th=[ 184], 20.00th=[ 192], 00:09:19.621 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 212], 00:09:19.621 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 233], 95.00th=[ 239], 00:09:19.621 | 99.00th=[ 255], 99.50th=[ 265], 99.90th=[ 570], 99.95th=[ 570], 00:09:19.621 | 99.99th=[ 2671] 00:09:19.621 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:19.621 slat (usec): min=14, max=145, avg=23.33, stdev= 6.82 00:09:19.621 clat (usec): min=103, max=268, avg=159.25, stdev=22.48 00:09:19.621 lat (usec): min=122, max=414, avg=182.58, stdev=24.89 00:09:19.621 clat percentiles (usec): 00:09:19.621 | 1.00th=[ 111], 5.00th=[ 121], 10.00th=[ 129], 20.00th=[ 141], 00:09:19.621 | 30.00th=[ 149], 40.00th=[ 155], 50.00th=[ 161], 60.00th=[ 165], 00:09:19.621 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 188], 95.00th=[ 196], 00:09:19.621 | 99.00th=[ 210], 99.50th=[ 221], 99.90th=[ 247], 99.95th=[ 262], 
00:09:19.621 | 99.99th=[ 269] 00:09:19.621 bw ( KiB/s): min=10880, max=10880, per=46.42%, avg=10880.00, stdev= 0.00, samples=1 00:09:19.621 iops : min= 2720, max= 2720, avg=2720.00, stdev= 0.00, samples=1 00:09:19.621 lat (usec) : 250=99.12%, 500=0.82%, 750=0.04% 00:09:19.621 lat (msec) : 4=0.02% 00:09:19.621 cpu : usr=1.90%, sys=7.60%, ctx=4903, majf=0, minf=7 00:09:19.621 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:19.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.621 issued rwts: total=2342,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.621 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:19.621 00:09:19.621 Run status group 0 (all jobs): 00:09:19.621 READ: bw=20.3MiB/s (21.3MB/s), 3233KiB/s-9359KiB/s (3310kB/s-9583kB/s), io=20.3MiB (21.3MB), run=1001-1001msec 00:09:19.621 WRITE: bw=22.9MiB/s (24.0MB/s), 4092KiB/s-9.99MiB/s (4190kB/s-10.5MB/s), io=22.9MiB (24.0MB), run=1001-1001msec 00:09:19.621 00:09:19.621 Disk stats (read/write): 00:09:19.621 nvme0n1: ios=629/1024, merge=0/0, ticks=372/455, in_queue=827, util=88.28% 00:09:19.621 nvme0n2: ios=900/1024, merge=0/0, ticks=437/345, in_queue=782, util=88.78% 00:09:19.621 nvme0n3: ios=854/1024, merge=0/0, ticks=404/359, in_queue=763, util=89.19% 00:09:19.621 nvme0n4: ios=2048/2143, merge=0/0, ticks=437/362, in_queue=799, util=89.74% 00:09:19.621 10:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:19.621 [global] 00:09:19.621 thread=1 00:09:19.621 invalidate=1 00:09:19.621 rw=randwrite 00:09:19.621 time_based=1 00:09:19.621 runtime=1 00:09:19.621 ioengine=libaio 00:09:19.621 direct=1 00:09:19.621 bs=4096 00:09:19.621 iodepth=1 00:09:19.621 norandommap=0 00:09:19.621 numjobs=1 00:09:19.621 00:09:19.621 verify_dump=1 00:09:19.621 verify_backlog=512 00:09:19.621 verify_state_save=0 00:09:19.621 do_verify=1 00:09:19.621 verify=crc32c-intel 00:09:19.621 [job0] 00:09:19.621 filename=/dev/nvme0n1 00:09:19.621 [job1] 00:09:19.621 filename=/dev/nvme0n2 00:09:19.621 [job2] 00:09:19.621 filename=/dev/nvme0n3 00:09:19.621 [job3] 00:09:19.621 filename=/dev/nvme0n4 00:09:19.621 Could not set queue depth (nvme0n1) 00:09:19.621 Could not set queue depth (nvme0n2) 00:09:19.621 Could not set queue depth (nvme0n3) 00:09:19.621 Could not set queue depth (nvme0n4) 00:09:19.879 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:19.879 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:19.879 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:19.880 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:19.880 fio-3.35 00:09:19.880 Starting 4 threads 00:09:21.257 00:09:21.258 job0: (groupid=0, jobs=1): err= 0: pid=66367: Fri Nov 15 10:27:21 2024 00:09:21.258 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:09:21.258 slat (nsec): min=16190, max=64962, avg=26144.74, stdev=7539.33 00:09:21.258 clat (usec): min=270, max=822, avg=436.67, stdev=77.65 00:09:21.258 lat (usec): min=292, max=850, avg=462.82, stdev=79.96 00:09:21.258 clat percentiles (usec): 00:09:21.258 | 1.00th=[ 293], 5.00th=[ 355], 10.00th=[ 371], 
20.00th=[ 388], 00:09:21.258 | 30.00th=[ 400], 40.00th=[ 408], 50.00th=[ 420], 60.00th=[ 433], 00:09:21.258 | 70.00th=[ 449], 80.00th=[ 469], 90.00th=[ 529], 95.00th=[ 619], 00:09:21.258 | 99.00th=[ 725], 99.50th=[ 750], 99.90th=[ 816], 99.95th=[ 824], 00:09:21.258 | 99.99th=[ 824] 00:09:21.258 write: IOPS=1301, BW=5207KiB/s (5332kB/s)(5212KiB/1001msec); 0 zone resets 00:09:21.258 slat (usec): min=21, max=103, avg=41.67, stdev= 9.77 00:09:21.258 clat (usec): min=109, max=1317, avg=355.57, stdev=104.10 00:09:21.258 lat (usec): min=155, max=1351, avg=397.23, stdev=107.71 00:09:21.258 clat percentiles (usec): 00:09:21.258 | 1.00th=[ 172], 5.00th=[ 200], 10.00th=[ 265], 20.00th=[ 285], 00:09:21.258 | 30.00th=[ 297], 40.00th=[ 318], 50.00th=[ 338], 60.00th=[ 363], 00:09:21.258 | 70.00th=[ 383], 80.00th=[ 412], 90.00th=[ 515], 95.00th=[ 553], 00:09:21.258 | 99.00th=[ 594], 99.50th=[ 660], 99.90th=[ 1156], 99.95th=[ 1319], 00:09:21.258 | 99.99th=[ 1319] 00:09:21.258 bw ( KiB/s): min= 5056, max= 5056, per=20.79%, avg=5056.00, stdev= 0.00, samples=1 00:09:21.258 iops : min= 1264, max= 1264, avg=1264.00, stdev= 0.00, samples=1 00:09:21.258 lat (usec) : 250=4.56%, 500=83.37%, 750=11.60%, 1000=0.34% 00:09:21.258 lat (msec) : 2=0.13% 00:09:21.258 cpu : usr=2.10%, sys=6.50%, ctx=2328, majf=0, minf=13 00:09:21.258 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.258 issued rwts: total=1024,1303,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.258 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.258 job1: (groupid=0, jobs=1): err= 0: pid=66368: Fri Nov 15 10:27:21 2024 00:09:21.258 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:21.258 slat (nsec): min=9019, max=53789, avg=14979.28, stdev=4757.12 00:09:21.258 clat (usec): min=162, max=2387, avg=330.56, stdev=77.65 00:09:21.258 lat (usec): min=179, max=2400, avg=345.54, stdev=77.92 00:09:21.258 clat percentiles (usec): 00:09:21.258 | 1.00th=[ 215], 5.00th=[ 265], 10.00th=[ 277], 20.00th=[ 289], 00:09:21.258 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 326], 00:09:21.258 | 70.00th=[ 343], 80.00th=[ 363], 90.00th=[ 412], 95.00th=[ 445], 00:09:21.258 | 99.00th=[ 502], 99.50th=[ 537], 99.90th=[ 685], 99.95th=[ 2376], 00:09:21.258 | 99.99th=[ 2376] 00:09:21.258 write: IOPS=1734, BW=6937KiB/s (7104kB/s)(6944KiB/1001msec); 0 zone resets 00:09:21.258 slat (usec): min=12, max=120, avg=23.14, stdev= 7.49 00:09:21.258 clat (usec): min=101, max=1847, avg=243.43, stdev=69.97 00:09:21.258 lat (usec): min=123, max=1865, avg=266.57, stdev=70.81 00:09:21.258 clat percentiles (usec): 00:09:21.258 | 1.00th=[ 130], 5.00th=[ 147], 10.00th=[ 161], 20.00th=[ 194], 00:09:21.258 | 30.00th=[ 215], 40.00th=[ 231], 50.00th=[ 245], 60.00th=[ 260], 00:09:21.258 | 70.00th=[ 273], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 330], 00:09:21.258 | 99.00th=[ 396], 99.50th=[ 429], 99.90th=[ 857], 99.95th=[ 1844], 00:09:21.258 | 99.99th=[ 1844] 00:09:21.258 bw ( KiB/s): min= 8192, max= 8192, per=33.69%, avg=8192.00, stdev= 0.00, samples=1 00:09:21.258 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:21.258 lat (usec) : 250=29.31%, 500=70.05%, 750=0.55%, 1000=0.03% 00:09:21.258 lat (msec) : 2=0.03%, 4=0.03% 00:09:21.258 cpu : usr=0.90%, sys=6.00%, ctx=3272, majf=0, minf=13 00:09:21.258 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.258 issued rwts: total=1536,1736,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.258 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.258 job2: (groupid=0, jobs=1): err= 0: pid=66369: Fri Nov 15 10:27:21 2024 00:09:21.258 read: IOPS=1513, BW=6054KiB/s (6199kB/s)(6060KiB/1001msec) 00:09:21.258 slat (nsec): min=9966, max=55683, avg=16883.40, stdev=5000.86 00:09:21.258 clat (usec): min=211, max=599, avg=327.05, stdev=50.74 00:09:21.258 lat (usec): min=225, max=614, avg=343.93, stdev=51.11 00:09:21.258 clat percentiles (usec): 00:09:21.258 | 1.00th=[ 253], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 289], 00:09:21.258 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 314], 60.00th=[ 326], 00:09:21.258 | 70.00th=[ 338], 80.00th=[ 363], 90.00th=[ 404], 95.00th=[ 429], 00:09:21.258 | 99.00th=[ 478], 99.50th=[ 506], 99.90th=[ 537], 99.95th=[ 603], 00:09:21.258 | 99.99th=[ 603] 00:09:21.258 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:21.258 slat (usec): min=11, max=102, avg=26.38, stdev=11.84 00:09:21.258 clat (usec): min=131, max=7390, avg=281.00, stdev=242.86 00:09:21.258 lat (usec): min=159, max=7426, avg=307.38, stdev=245.61 00:09:21.258 clat percentiles (usec): 00:09:21.258 | 1.00th=[ 159], 5.00th=[ 186], 10.00th=[ 202], 20.00th=[ 219], 00:09:21.258 | 30.00th=[ 233], 40.00th=[ 247], 50.00th=[ 260], 60.00th=[ 269], 00:09:21.258 | 70.00th=[ 285], 80.00th=[ 306], 90.00th=[ 355], 95.00th=[ 408], 00:09:21.258 | 99.00th=[ 478], 99.50th=[ 857], 99.90th=[ 3589], 99.95th=[ 7373], 00:09:21.258 | 99.99th=[ 7373] 00:09:21.258 bw ( KiB/s): min= 7664, max= 7664, per=31.52%, avg=7664.00, stdev= 0.00, samples=1 00:09:21.258 iops : min= 1916, max= 1916, avg=1916.00, stdev= 0.00, samples=1 00:09:21.258 lat (usec) : 250=21.93%, 500=77.38%, 750=0.43%, 1000=0.03% 00:09:21.258 lat (msec) : 2=0.03%, 4=0.16%, 10=0.03% 00:09:21.258 cpu : usr=1.50%, sys=5.70%, ctx=3052, majf=0, minf=15 00:09:21.258 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.258 issued rwts: total=1515,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.258 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.258 job3: (groupid=0, jobs=1): err= 0: pid=66370: Fri Nov 15 10:27:21 2024 00:09:21.258 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:09:21.258 slat (nsec): min=17371, max=85255, avg=30793.92, stdev=9482.47 00:09:21.258 clat (usec): min=275, max=814, avg=425.86, stdev=72.28 00:09:21.258 lat (usec): min=301, max=862, avg=456.65, stdev=72.62 00:09:21.258 clat percentiles (usec): 00:09:21.258 | 1.00th=[ 318], 5.00th=[ 347], 10.00th=[ 363], 20.00th=[ 379], 00:09:21.258 | 30.00th=[ 392], 40.00th=[ 404], 50.00th=[ 412], 60.00th=[ 424], 00:09:21.258 | 70.00th=[ 441], 80.00th=[ 457], 90.00th=[ 490], 95.00th=[ 537], 00:09:21.258 | 99.00th=[ 750], 99.50th=[ 766], 99.90th=[ 791], 99.95th=[ 816], 00:09:21.258 | 99.99th=[ 816] 00:09:21.258 write: IOPS=1508, BW=6034KiB/s (6179kB/s)(6040KiB/1001msec); 0 zone resets 00:09:21.258 slat (nsec): min=23622, max=91215, avg=40294.21, stdev=8968.50 00:09:21.258 clat (usec): min=154, max=1285, avg=306.60, stdev=73.84 00:09:21.258 lat 
(usec): min=179, max=1336, avg=346.89, stdev=75.38 00:09:21.258 clat percentiles (usec): 00:09:21.258 | 1.00th=[ 184], 5.00th=[ 198], 10.00th=[ 215], 20.00th=[ 251], 00:09:21.258 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 302], 60.00th=[ 318], 00:09:21.258 | 70.00th=[ 343], 80.00th=[ 371], 90.00th=[ 392], 95.00th=[ 408], 00:09:21.258 | 99.00th=[ 449], 99.50th=[ 490], 99.90th=[ 1057], 99.95th=[ 1287], 00:09:21.258 | 99.99th=[ 1287] 00:09:21.258 bw ( KiB/s): min= 5760, max= 5760, per=23.69%, avg=5760.00, stdev= 0.00, samples=1 00:09:21.258 iops : min= 1440, max= 1440, avg=1440.00, stdev= 0.00, samples=1 00:09:21.258 lat (usec) : 250=11.76%, 500=84.96%, 750=2.76%, 1000=0.43% 00:09:21.258 lat (msec) : 2=0.08% 00:09:21.258 cpu : usr=2.30%, sys=7.00%, ctx=2537, majf=0, minf=6 00:09:21.258 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.258 issued rwts: total=1024,1510,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.258 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.258 00:09:21.258 Run status group 0 (all jobs): 00:09:21.259 READ: bw=19.9MiB/s (20.9MB/s), 4092KiB/s-6138KiB/s (4190kB/s-6285kB/s), io=19.9MiB (20.9MB), run=1001-1001msec 00:09:21.259 WRITE: bw=23.7MiB/s (24.9MB/s), 5207KiB/s-6937KiB/s (5332kB/s-7104kB/s), io=23.8MiB (24.9MB), run=1001-1001msec 00:09:21.259 00:09:21.259 Disk stats (read/write): 00:09:21.259 nvme0n1: ios=1040/1024, merge=0/0, ticks=445/366, in_queue=811, util=86.97% 00:09:21.259 nvme0n2: ios=1272/1536, merge=0/0, ticks=420/359, in_queue=779, util=87.35% 00:09:21.259 nvme0n3: ios=1088/1536, merge=0/0, ticks=349/390, in_queue=739, util=88.30% 00:09:21.259 nvme0n4: ios=1024/1051, merge=0/0, ticks=436/349, in_queue=785, util=89.60% 00:09:21.259 10:27:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:21.259 [global] 00:09:21.259 thread=1 00:09:21.259 invalidate=1 00:09:21.259 rw=write 00:09:21.259 time_based=1 00:09:21.259 runtime=1 00:09:21.259 ioengine=libaio 00:09:21.259 direct=1 00:09:21.259 bs=4096 00:09:21.259 iodepth=128 00:09:21.259 norandommap=0 00:09:21.259 numjobs=1 00:09:21.259 00:09:21.259 verify_dump=1 00:09:21.259 verify_backlog=512 00:09:21.259 verify_state_save=0 00:09:21.259 do_verify=1 00:09:21.259 verify=crc32c-intel 00:09:21.259 [job0] 00:09:21.259 filename=/dev/nvme0n1 00:09:21.259 [job1] 00:09:21.259 filename=/dev/nvme0n2 00:09:21.259 [job2] 00:09:21.259 filename=/dev/nvme0n3 00:09:21.259 [job3] 00:09:21.259 filename=/dev/nvme0n4 00:09:21.259 Could not set queue depth (nvme0n1) 00:09:21.259 Could not set queue depth (nvme0n2) 00:09:21.259 Could not set queue depth (nvme0n3) 00:09:21.259 Could not set queue depth (nvme0n4) 00:09:21.259 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:21.259 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:21.259 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:21.259 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:21.259 fio-3.35 00:09:21.259 Starting 4 threads 00:09:22.635 00:09:22.635 job0: (groupid=0, jobs=1): err= 0: pid=66435: Fri 
Nov 15 10:27:23 2024 00:09:22.635 read: IOPS=2907, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1002msec) 00:09:22.635 slat (usec): min=6, max=6149, avg=166.39, stdev=840.72 00:09:22.635 clat (usec): min=611, max=25176, avg=21443.14, stdev=2680.97 00:09:22.635 lat (usec): min=5012, max=25216, avg=21609.53, stdev=2549.89 00:09:22.635 clat percentiles (usec): 00:09:22.635 | 1.00th=[ 5473], 5.00th=[17171], 10.00th=[19268], 20.00th=[20317], 00:09:22.635 | 30.00th=[20841], 40.00th=[21365], 50.00th=[21627], 60.00th=[22152], 00:09:22.635 | 70.00th=[22938], 80.00th=[23462], 90.00th=[23725], 95.00th=[23987], 00:09:22.635 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:09:22.635 | 99.99th=[25297] 00:09:22.635 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:09:22.635 slat (usec): min=13, max=5658, avg=159.50, stdev=755.43 00:09:22.635 clat (usec): min=14759, max=23784, avg=20752.01, stdev=1508.16 00:09:22.635 lat (usec): min=14943, max=23813, avg=20911.51, stdev=1318.68 00:09:22.635 clat percentiles (usec): 00:09:22.635 | 1.00th=[15664], 5.00th=[19006], 10.00th=[19268], 20.00th=[19530], 00:09:22.635 | 30.00th=[19792], 40.00th=[20055], 50.00th=[20841], 60.00th=[21365], 00:09:22.635 | 70.00th=[21627], 80.00th=[22152], 90.00th=[22676], 95.00th=[22938], 00:09:22.635 | 99.00th=[23725], 99.50th=[23725], 99.90th=[23725], 99.95th=[23725], 00:09:22.635 | 99.99th=[23725] 00:09:22.635 bw ( KiB/s): min=12288, max=12312, per=24.12%, avg=12300.00, stdev=16.97, samples=2 00:09:22.635 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:09:22.635 lat (usec) : 750=0.02% 00:09:22.635 lat (msec) : 10=0.53%, 20=25.50%, 50=73.95% 00:09:22.635 cpu : usr=3.00%, sys=9.19%, ctx=188, majf=0, minf=11 00:09:22.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:22.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:22.635 issued rwts: total=2913,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.635 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:22.635 job1: (groupid=0, jobs=1): err= 0: pid=66436: Fri Nov 15 10:27:23 2024 00:09:22.635 read: IOPS=3124, BW=12.2MiB/s (12.8MB/s)(12.3MiB/1004msec) 00:09:22.635 slat (usec): min=7, max=7174, avg=149.29, stdev=745.24 00:09:22.635 clat (usec): min=539, max=26257, avg=19478.92, stdev=2718.30 00:09:22.635 lat (usec): min=5022, max=26273, avg=19628.21, stdev=2620.51 00:09:22.635 clat percentiles (usec): 00:09:22.635 | 1.00th=[ 5473], 5.00th=[16909], 10.00th=[17695], 20.00th=[17957], 00:09:22.635 | 30.00th=[18220], 40.00th=[18744], 50.00th=[19268], 60.00th=[20055], 00:09:22.635 | 70.00th=[20579], 80.00th=[21365], 90.00th=[22152], 95.00th=[23462], 00:09:22.635 | 99.00th=[25297], 99.50th=[26084], 99.90th=[26346], 99.95th=[26346], 00:09:22.635 | 99.99th=[26346] 00:09:22.635 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:09:22.635 slat (usec): min=10, max=5856, avg=140.63, stdev=658.64 00:09:22.635 clat (usec): min=11451, max=21563, avg=18274.23, stdev=1853.88 00:09:22.635 lat (usec): min=12315, max=22279, avg=18414.86, stdev=1751.26 00:09:22.635 clat percentiles (usec): 00:09:22.635 | 1.00th=[13435], 5.00th=[15008], 10.00th=[15401], 20.00th=[16909], 00:09:22.635 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18220], 60.00th=[18744], 00:09:22.635 | 70.00th=[19006], 80.00th=[20055], 90.00th=[20841], 95.00th=[21103], 00:09:22.635 | 99.00th=[21365], 
99.50th=[21365], 99.90th=[21627], 99.95th=[21627], 00:09:22.635 | 99.99th=[21627] 00:09:22.635 bw ( KiB/s): min=12577, max=15616, per=27.64%, avg=14096.50, stdev=2148.90, samples=2 00:09:22.635 iops : min= 3144, max= 3904, avg=3524.00, stdev=537.40, samples=2 00:09:22.635 lat (usec) : 750=0.01% 00:09:22.635 lat (msec) : 10=0.95%, 20=68.71%, 50=30.32% 00:09:22.635 cpu : usr=3.79%, sys=9.97%, ctx=213, majf=0, minf=14 00:09:22.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:22.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:22.635 issued rwts: total=3137,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.635 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:22.635 job2: (groupid=0, jobs=1): err= 0: pid=66437: Fri Nov 15 10:27:23 2024 00:09:22.635 read: IOPS=2740, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1004msec) 00:09:22.635 slat (usec): min=7, max=6523, avg=169.50, stdev=719.12 00:09:22.635 clat (usec): min=711, max=29272, avg=21667.23, stdev=2997.47 00:09:22.635 lat (usec): min=3825, max=29308, avg=21836.74, stdev=3045.43 00:09:22.635 clat percentiles (usec): 00:09:22.635 | 1.00th=[ 7308], 5.00th=[17957], 10.00th=[19268], 20.00th=[20579], 00:09:22.635 | 30.00th=[21103], 40.00th=[21365], 50.00th=[21890], 60.00th=[22152], 00:09:22.635 | 70.00th=[22938], 80.00th=[23462], 90.00th=[24249], 95.00th=[25822], 00:09:22.635 | 99.00th=[27132], 99.50th=[27395], 99.90th=[28705], 99.95th=[28705], 00:09:22.635 | 99.99th=[29230] 00:09:22.635 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:09:22.635 slat (usec): min=11, max=6997, avg=165.84, stdev=817.86 00:09:22.635 clat (usec): min=16121, max=29290, avg=21773.79, stdev=1894.58 00:09:22.635 lat (usec): min=16154, max=29369, avg=21939.63, stdev=2034.31 00:09:22.635 clat percentiles (usec): 00:09:22.635 | 1.00th=[17171], 5.00th=[19006], 10.00th=[19530], 20.00th=[20579], 00:09:22.635 | 30.00th=[20841], 40.00th=[21365], 50.00th=[21627], 60.00th=[21890], 00:09:22.635 | 70.00th=[22414], 80.00th=[22938], 90.00th=[24249], 95.00th=[25560], 00:09:22.635 | 99.00th=[27657], 99.50th=[28443], 99.90th=[28967], 99.95th=[29230], 00:09:22.635 | 99.99th=[29230] 00:09:22.635 bw ( KiB/s): min=12288, max=12312, per=24.12%, avg=12300.00, stdev=16.97, samples=2 00:09:22.635 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:09:22.635 lat (usec) : 750=0.02% 00:09:22.635 lat (msec) : 4=0.15%, 10=0.55%, 20=13.34%, 50=85.94% 00:09:22.635 cpu : usr=2.29%, sys=10.47%, ctx=255, majf=0, minf=9 00:09:22.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:22.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:22.635 issued rwts: total=2751,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.635 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:22.635 job3: (groupid=0, jobs=1): err= 0: pid=66438: Fri Nov 15 10:27:23 2024 00:09:22.636 read: IOPS=3008, BW=11.8MiB/s (12.3MB/s)(11.8MiB/1004msec) 00:09:22.636 slat (usec): min=6, max=6847, avg=157.73, stdev=672.52 00:09:22.636 clat (usec): min=687, max=27946, avg=20528.05, stdev=2658.17 00:09:22.636 lat (usec): min=3717, max=27989, avg=20685.78, stdev=2702.00 00:09:22.636 clat percentiles (usec): 00:09:22.636 | 1.00th=[ 7177], 5.00th=[17171], 10.00th=[18744], 20.00th=[19530], 
00:09:22.636 | 30.00th=[20055], 40.00th=[20317], 50.00th=[20579], 60.00th=[20841], 00:09:22.636 | 70.00th=[21365], 80.00th=[21890], 90.00th=[22938], 95.00th=[23987], 00:09:22.636 | 99.00th=[25560], 99.50th=[25822], 99.90th=[26346], 99.95th=[27395], 00:09:22.636 | 99.99th=[27919] 00:09:22.636 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:09:22.636 slat (usec): min=15, max=7141, avg=162.04, stdev=859.22 00:09:22.636 clat (usec): min=15556, max=31074, avg=20953.15, stdev=1834.40 00:09:22.636 lat (usec): min=15579, max=31148, avg=21115.19, stdev=2003.45 00:09:22.636 clat percentiles (usec): 00:09:22.636 | 1.00th=[16712], 5.00th=[18744], 10.00th=[19268], 20.00th=[19792], 00:09:22.636 | 30.00th=[20055], 40.00th=[20317], 50.00th=[20579], 60.00th=[20841], 00:09:22.636 | 70.00th=[21365], 80.00th=[22152], 90.00th=[23462], 95.00th=[24511], 00:09:22.636 | 99.00th=[26346], 99.50th=[26870], 99.90th=[30016], 99.95th=[30540], 00:09:22.636 | 99.99th=[31065] 00:09:22.636 bw ( KiB/s): min=12288, max=12288, per=24.10%, avg=12288.00, stdev= 0.00, samples=2 00:09:22.636 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:09:22.636 lat (usec) : 750=0.02% 00:09:22.636 lat (msec) : 4=0.26%, 10=0.43%, 20=26.88%, 50=72.41% 00:09:22.636 cpu : usr=3.39%, sys=9.47%, ctx=220, majf=0, minf=8 00:09:22.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:22.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:22.636 issued rwts: total=3021,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.636 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:22.636 00:09:22.636 Run status group 0 (all jobs): 00:09:22.636 READ: bw=46.0MiB/s (48.2MB/s), 10.7MiB/s-12.2MiB/s (11.2MB/s-12.8MB/s), io=46.2MiB (48.4MB), run=1002-1004msec 00:09:22.636 WRITE: bw=49.8MiB/s (52.2MB/s), 12.0MiB/s-13.9MiB/s (12.5MB/s-14.6MB/s), io=50.0MiB (52.4MB), run=1002-1004msec 00:09:22.636 00:09:22.636 Disk stats (read/write): 00:09:22.636 nvme0n1: ios=2610/2560, merge=0/0, ticks=13117/11971, in_queue=25088, util=88.19% 00:09:22.636 nvme0n2: ios=2673/3072, merge=0/0, ticks=12158/12571, in_queue=24729, util=87.97% 00:09:22.636 nvme0n3: ios=2426/2560, merge=0/0, ticks=17310/16039, in_queue=33349, util=88.91% 00:09:22.636 nvme0n4: ios=2560/2608, merge=0/0, ticks=17251/15879, in_queue=33130, util=89.67% 00:09:22.636 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:22.636 [global] 00:09:22.636 thread=1 00:09:22.636 invalidate=1 00:09:22.636 rw=randwrite 00:09:22.636 time_based=1 00:09:22.636 runtime=1 00:09:22.636 ioengine=libaio 00:09:22.636 direct=1 00:09:22.636 bs=4096 00:09:22.636 iodepth=128 00:09:22.636 norandommap=0 00:09:22.636 numjobs=1 00:09:22.636 00:09:22.636 verify_dump=1 00:09:22.636 verify_backlog=512 00:09:22.636 verify_state_save=0 00:09:22.636 do_verify=1 00:09:22.636 verify=crc32c-intel 00:09:22.636 [job0] 00:09:22.636 filename=/dev/nvme0n1 00:09:22.636 [job1] 00:09:22.636 filename=/dev/nvme0n2 00:09:22.636 [job2] 00:09:22.636 filename=/dev/nvme0n3 00:09:22.636 [job3] 00:09:22.636 filename=/dev/nvme0n4 00:09:22.636 Could not set queue depth (nvme0n1) 00:09:22.636 Could not set queue depth (nvme0n2) 00:09:22.636 Could not set queue depth (nvme0n3) 00:09:22.636 Could not set queue depth (nvme0n4) 00:09:22.636 job0: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:22.636 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:22.636 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:22.636 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:22.636 fio-3.35 00:09:22.636 Starting 4 threads 00:09:24.022 00:09:24.022 job0: (groupid=0, jobs=1): err= 0: pid=66492: Fri Nov 15 10:27:24 2024 00:09:24.022 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:09:24.022 slat (usec): min=7, max=9502, avg=133.19, stdev=877.03 00:09:24.022 clat (usec): min=5793, max=30834, avg=18322.00, stdev=2440.04 00:09:24.022 lat (usec): min=5807, max=36689, avg=18455.19, stdev=2468.95 00:09:24.022 clat percentiles (usec): 00:09:24.022 | 1.00th=[11207], 5.00th=[13042], 10.00th=[16450], 20.00th=[17171], 00:09:24.022 | 30.00th=[17695], 40.00th=[18220], 50.00th=[18482], 60.00th=[18744], 00:09:24.022 | 70.00th=[19268], 80.00th=[19530], 90.00th=[20579], 95.00th=[20841], 00:09:24.022 | 99.00th=[27395], 99.50th=[30016], 99.90th=[30802], 99.95th=[30802], 00:09:24.022 | 99.99th=[30802] 00:09:24.022 write: IOPS=3585, BW=14.0MiB/s (14.7MB/s)(14.1MiB/1004msec); 0 zone resets 00:09:24.022 slat (usec): min=6, max=14393, avg=137.28, stdev=894.92 00:09:24.022 clat (usec): min=1848, max=24798, avg=17074.80, stdev=2173.43 00:09:24.022 lat (usec): min=3918, max=27468, avg=17212.08, stdev=2070.51 00:09:24.022 clat percentiles (usec): 00:09:24.022 | 1.00th=[ 9896], 5.00th=[14615], 10.00th=[15270], 20.00th=[15795], 00:09:24.022 | 30.00th=[16188], 40.00th=[16450], 50.00th=[16909], 60.00th=[17171], 00:09:24.022 | 70.00th=[17695], 80.00th=[18220], 90.00th=[19530], 95.00th=[20055], 00:09:24.022 | 99.00th=[24249], 99.50th=[24249], 99.90th=[24773], 99.95th=[24773], 00:09:24.022 | 99.99th=[24773] 00:09:24.022 bw ( KiB/s): min=12800, max=15872, per=31.87%, avg=14336.00, stdev=2172.23, samples=2 00:09:24.022 iops : min= 3200, max= 3968, avg=3584.00, stdev=543.06, samples=2 00:09:24.022 lat (msec) : 2=0.01%, 4=0.04%, 10=0.67%, 20=88.50%, 50=10.77% 00:09:24.022 cpu : usr=4.19%, sys=9.17%, ctx=147, majf=0, minf=3 00:09:24.022 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:09:24.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.022 issued rwts: total=3584,3600,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.022 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:24.022 job1: (groupid=0, jobs=1): err= 0: pid=66493: Fri Nov 15 10:27:24 2024 00:09:24.022 read: IOPS=1896, BW=7584KiB/s (7766kB/s)(7660KiB/1010msec) 00:09:24.022 slat (usec): min=7, max=17460, avg=241.09, stdev=1691.04 00:09:24.023 clat (usec): min=854, max=53351, avg=31910.70, stdev=5270.68 00:09:24.023 lat (usec): min=11092, max=65780, avg=32151.80, stdev=5301.63 00:09:24.023 clat percentiles (usec): 00:09:24.023 | 1.00th=[11469], 5.00th=[19530], 10.00th=[27395], 20.00th=[31065], 00:09:24.023 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32637], 60.00th=[33424], 00:09:24.023 | 70.00th=[33817], 80.00th=[34341], 90.00th=[35390], 95.00th=[36439], 00:09:24.023 | 99.00th=[49546], 99.50th=[51119], 99.90th=[53216], 99.95th=[53216], 00:09:24.023 | 99.99th=[53216] 00:09:24.023 write: IOPS=2027, BW=8111KiB/s 
(8306kB/s)(8192KiB/1010msec); 0 zone resets 00:09:24.023 slat (usec): min=6, max=31651, avg=258.40, stdev=1869.43 00:09:24.023 clat (usec): min=15115, max=52326, avg=32642.54, stdev=4646.64 00:09:24.023 lat (usec): min=18163, max=52356, avg=32900.94, stdev=4343.85 00:09:24.023 clat percentiles (usec): 00:09:24.023 | 1.00th=[18220], 5.00th=[27132], 10.00th=[28443], 20.00th=[30540], 00:09:24.023 | 30.00th=[31065], 40.00th=[31589], 50.00th=[32637], 60.00th=[33424], 00:09:24.023 | 70.00th=[33817], 80.00th=[34341], 90.00th=[35914], 95.00th=[35914], 00:09:24.023 | 99.00th=[51643], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167], 00:09:24.023 | 99.99th=[52167] 00:09:24.023 bw ( KiB/s): min= 8192, max= 8192, per=18.21%, avg=8192.00, stdev= 0.00, samples=2 00:09:24.023 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:09:24.023 lat (usec) : 1000=0.03% 00:09:24.023 lat (msec) : 20=3.38%, 50=94.60%, 100=1.99% 00:09:24.023 cpu : usr=2.18%, sys=5.65%, ctx=84, majf=0, minf=11 00:09:24.023 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:09:24.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.023 issued rwts: total=1915,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.023 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:24.023 job2: (groupid=0, jobs=1): err= 0: pid=66494: Fri Nov 15 10:27:24 2024 00:09:24.023 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:09:24.023 slat (usec): min=7, max=9944, avg=129.82, stdev=857.08 00:09:24.023 clat (usec): min=10082, max=30157, avg=17908.02, stdev=2059.89 00:09:24.023 lat (usec): min=10094, max=35455, avg=18037.84, stdev=2085.59 00:09:24.023 clat percentiles (usec): 00:09:24.023 | 1.00th=[11076], 5.00th=[15664], 10.00th=[16712], 20.00th=[16909], 00:09:24.023 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17957], 60.00th=[18220], 00:09:24.023 | 70.00th=[18482], 80.00th=[19006], 90.00th=[19530], 95.00th=[19530], 00:09:24.023 | 99.00th=[26608], 99.50th=[28443], 99.90th=[30278], 99.95th=[30278], 00:09:24.023 | 99.99th=[30278] 00:09:24.023 write: IOPS=3644, BW=14.2MiB/s (14.9MB/s)(14.3MiB/1005msec); 0 zone resets 00:09:24.023 slat (usec): min=6, max=15368, avg=138.56, stdev=899.83 00:09:24.023 clat (usec): min=1764, max=26076, avg=17258.09, stdev=2370.16 00:09:24.023 lat (usec): min=7669, max=26101, avg=17396.65, stdev=2231.33 00:09:24.023 clat percentiles (usec): 00:09:24.023 | 1.00th=[ 8717], 5.00th=[13698], 10.00th=[14877], 20.00th=[16188], 00:09:24.023 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17695], 60.00th=[17957], 00:09:24.023 | 70.00th=[18220], 80.00th=[18482], 90.00th=[19268], 95.00th=[19530], 00:09:24.023 | 99.00th=[25560], 99.50th=[25822], 99.90th=[26084], 99.95th=[26084], 00:09:24.023 | 99.99th=[26084] 00:09:24.023 bw ( KiB/s): min=12808, max=15864, per=31.87%, avg=14336.00, stdev=2160.92, samples=2 00:09:24.023 iops : min= 3202, max= 3966, avg=3584.00, stdev=540.23, samples=2 00:09:24.023 lat (msec) : 2=0.01%, 10=1.03%, 20=96.00%, 50=2.95% 00:09:24.023 cpu : usr=2.99%, sys=10.56%, ctx=159, majf=0, minf=4 00:09:24.023 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:09:24.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.023 issued rwts: total=3584,3663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.023 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:09:24.023 job3: (groupid=0, jobs=1): err= 0: pid=66495: Fri Nov 15 10:27:24 2024 00:09:24.023 read: IOPS=1896, BW=7584KiB/s (7766kB/s)(7660KiB/1010msec) 00:09:24.023 slat (usec): min=7, max=17636, avg=240.39, stdev=1706.45 00:09:24.023 clat (usec): min=2469, max=53278, avg=31930.35, stdev=5270.30 00:09:24.023 lat (usec): min=11015, max=65717, avg=32170.74, stdev=5310.55 00:09:24.023 clat percentiles (usec): 00:09:24.023 | 1.00th=[11338], 5.00th=[19792], 10.00th=[27395], 20.00th=[30802], 00:09:24.023 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32637], 60.00th=[33424], 00:09:24.023 | 70.00th=[34341], 80.00th=[34341], 90.00th=[35390], 95.00th=[35390], 00:09:24.023 | 99.00th=[49546], 99.50th=[51119], 99.90th=[53216], 99.95th=[53216], 00:09:24.023 | 99.99th=[53216] 00:09:24.023 write: IOPS=2027, BW=8111KiB/s (8306kB/s)(8192KiB/1010msec); 0 zone resets 00:09:24.023 slat (usec): min=7, max=30778, avg=258.22, stdev=1856.08 00:09:24.023 clat (usec): min=15337, max=51409, avg=32607.32, stdev=4568.11 00:09:24.023 lat (usec): min=19081, max=51436, avg=32865.54, stdev=4263.19 00:09:24.023 clat percentiles (usec): 00:09:24.023 | 1.00th=[19006], 5.00th=[27132], 10.00th=[28705], 20.00th=[30278], 00:09:24.023 | 30.00th=[31065], 40.00th=[31589], 50.00th=[32637], 60.00th=[33162], 00:09:24.023 | 70.00th=[33817], 80.00th=[34341], 90.00th=[35914], 95.00th=[35914], 00:09:24.023 | 99.00th=[51119], 99.50th=[51119], 99.90th=[51119], 99.95th=[51643], 00:09:24.023 | 99.99th=[51643] 00:09:24.023 bw ( KiB/s): min= 8192, max= 8192, per=18.21%, avg=8192.00, stdev= 0.00, samples=2 00:09:24.023 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:09:24.023 lat (msec) : 4=0.03%, 20=3.53%, 50=94.42%, 100=2.02% 00:09:24.023 cpu : usr=1.88%, sys=6.24%, ctx=80, majf=0, minf=5 00:09:24.023 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:09:24.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.023 issued rwts: total=1915,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.023 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:24.023 00:09:24.023 Run status group 0 (all jobs): 00:09:24.023 READ: bw=42.5MiB/s (44.6MB/s), 7584KiB/s-13.9MiB/s (7766kB/s-14.6MB/s), io=43.0MiB (45.0MB), run=1004-1010msec 00:09:24.023 WRITE: bw=43.9MiB/s (46.1MB/s), 8111KiB/s-14.2MiB/s (8306kB/s-14.9MB/s), io=44.4MiB (46.5MB), run=1004-1010msec 00:09:24.023 00:09:24.023 Disk stats (read/write): 00:09:24.023 nvme0n1: ios=3075/3072, merge=0/0, ticks=53105/50030, in_queue=103135, util=88.48% 00:09:24.023 nvme0n2: ios=1585/1792, merge=0/0, ticks=48673/56281, in_queue=104954, util=88.88% 00:09:24.023 nvme0n3: ios=3092/3080, merge=0/0, ticks=52384/50330, in_queue=102714, util=89.80% 00:09:24.023 nvme0n4: ios=1536/1792, merge=0/0, ticks=48670/56228, in_queue=104898, util=89.74% 00:09:24.023 10:27:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:24.023 10:27:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66508 00:09:24.023 10:27:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:24.023 10:27:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:24.023 [global] 00:09:24.023 thread=1 00:09:24.023 invalidate=1 00:09:24.023 rw=read 
00:09:24.023 time_based=1 00:09:24.023 runtime=10 00:09:24.023 ioengine=libaio 00:09:24.023 direct=1 00:09:24.023 bs=4096 00:09:24.023 iodepth=1 00:09:24.023 norandommap=1 00:09:24.023 numjobs=1 00:09:24.023 00:09:24.023 [job0] 00:09:24.023 filename=/dev/nvme0n1 00:09:24.023 [job1] 00:09:24.023 filename=/dev/nvme0n2 00:09:24.023 [job2] 00:09:24.023 filename=/dev/nvme0n3 00:09:24.023 [job3] 00:09:24.023 filename=/dev/nvme0n4 00:09:24.023 Could not set queue depth (nvme0n1) 00:09:24.023 Could not set queue depth (nvme0n2) 00:09:24.023 Could not set queue depth (nvme0n3) 00:09:24.023 Could not set queue depth (nvme0n4) 00:09:24.023 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.023 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.023 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.023 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.023 fio-3.35 00:09:24.023 Starting 4 threads 00:09:27.305 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:27.305 fio: pid=66557, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:27.305 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=29134848, buflen=4096 00:09:27.305 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:27.305 fio: pid=66556, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:27.305 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=51511296, buflen=4096 00:09:27.305 10:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:27.305 10:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:27.564 fio: pid=66550, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:27.564 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=37814272, buflen=4096 00:09:27.564 10:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:27.564 10:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:27.823 fio: pid=66554, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:27.824 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=43933696, buflen=4096 00:09:28.083 00:09:28.083 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66550: Fri Nov 15 10:27:28 2024 00:09:28.083 read: IOPS=2609, BW=10.2MiB/s (10.7MB/s)(36.1MiB/3538msec) 00:09:28.083 slat (usec): min=7, max=15529, avg=24.39, stdev=236.28 00:09:28.083 clat (usec): min=124, max=3976, avg=356.51, stdev=127.45 00:09:28.083 lat (usec): min=138, max=15878, avg=380.90, stdev=268.74 00:09:28.083 clat percentiles (usec): 00:09:28.083 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 151], 20.00th=[ 212], 00:09:28.083 | 30.00th=[ 330], 40.00th=[ 379], 
50.00th=[ 404], 60.00th=[ 420], 00:09:28.083 | 70.00th=[ 433], 80.00th=[ 445], 90.00th=[ 465], 95.00th=[ 482], 00:09:28.083 | 99.00th=[ 523], 99.50th=[ 545], 99.90th=[ 693], 99.95th=[ 1434], 00:09:28.083 | 99.99th=[ 3982] 00:09:28.083 bw ( KiB/s): min= 8464, max=10216, per=21.78%, avg=9050.67, stdev=613.09, samples=6 00:09:28.083 iops : min= 2116, max= 2554, avg=2262.67, stdev=153.27, samples=6 00:09:28.083 lat (usec) : 250=23.24%, 500=74.32%, 750=2.34%, 1000=0.02% 00:09:28.083 lat (msec) : 2=0.03%, 4=0.03% 00:09:28.083 cpu : usr=1.13%, sys=4.81%, ctx=9250, majf=0, minf=1 00:09:28.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:28.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.083 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.083 issued rwts: total=9233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.083 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:28.083 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66554: Fri Nov 15 10:27:28 2024 00:09:28.083 read: IOPS=2810, BW=11.0MiB/s (11.5MB/s)(41.9MiB/3817msec) 00:09:28.083 slat (usec): min=7, max=21493, avg=21.56, stdev=294.82 00:09:28.083 clat (usec): min=125, max=5840, avg=332.80, stdev=164.09 00:09:28.083 lat (usec): min=138, max=21794, avg=354.36, stdev=337.08 00:09:28.083 clat percentiles (usec): 00:09:28.083 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 155], 00:09:28.084 | 30.00th=[ 200], 40.00th=[ 338], 50.00th=[ 383], 60.00th=[ 408], 00:09:28.084 | 70.00th=[ 429], 80.00th=[ 449], 90.00th=[ 474], 95.00th=[ 494], 00:09:28.084 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 840], 99.95th=[ 1500], 00:09:28.084 | 99.99th=[ 5407] 00:09:28.084 bw ( KiB/s): min= 8464, max=17400, per=24.78%, avg=10295.43, stdev=3186.99, samples=7 00:09:28.084 iops : min= 2116, max= 4350, avg=2573.86, stdev=796.75, samples=7 00:09:28.084 lat (usec) : 250=33.91%, 500=62.38%, 750=3.60%, 1000=0.02% 00:09:28.084 lat (msec) : 2=0.05%, 4=0.01%, 10=0.04% 00:09:28.084 cpu : usr=0.94%, sys=4.01%, ctx=10744, majf=0, minf=2 00:09:28.084 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:28.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.084 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.084 issued rwts: total=10727,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.084 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:28.084 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66556: Fri Nov 15 10:27:28 2024 00:09:28.084 read: IOPS=3855, BW=15.1MiB/s (15.8MB/s)(49.1MiB/3262msec) 00:09:28.084 slat (usec): min=10, max=10800, avg=17.61, stdev=118.02 00:09:28.084 clat (usec): min=156, max=2384, avg=240.19, stdev=43.22 00:09:28.084 lat (usec): min=168, max=11116, avg=257.80, stdev=127.06 00:09:28.084 clat percentiles (usec): 00:09:28.084 | 1.00th=[ 184], 5.00th=[ 200], 10.00th=[ 208], 20.00th=[ 217], 00:09:28.084 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 241], 00:09:28.084 | 70.00th=[ 249], 80.00th=[ 260], 90.00th=[ 281], 95.00th=[ 302], 00:09:28.084 | 99.00th=[ 347], 99.50th=[ 363], 99.90th=[ 420], 99.95th=[ 668], 00:09:28.084 | 99.99th=[ 1909] 00:09:28.084 bw ( KiB/s): min=13832, max=16624, per=37.77%, avg=15694.67, stdev=994.42, samples=6 00:09:28.084 iops : min= 3458, max= 4156, avg=3923.67, stdev=248.60, 
samples=6 00:09:28.084 lat (usec) : 250=71.78%, 500=28.14%, 750=0.04%, 1000=0.01% 00:09:28.084 lat (msec) : 2=0.02%, 4=0.01% 00:09:28.084 cpu : usr=1.32%, sys=5.55%, ctx=12581, majf=0, minf=2 00:09:28.084 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:28.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.084 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.084 issued rwts: total=12577,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.084 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:28.084 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66557: Fri Nov 15 10:27:28 2024 00:09:28.084 read: IOPS=2376, BW=9506KiB/s (9734kB/s)(27.8MiB/2993msec) 00:09:28.084 slat (usec): min=9, max=108, avg=16.85, stdev= 7.40 00:09:28.084 clat (usec): min=181, max=6537, avg=402.03, stdev=167.53 00:09:28.084 lat (usec): min=193, max=6549, avg=418.88, stdev=168.84 00:09:28.084 clat percentiles (usec): 00:09:28.084 | 1.00th=[ 202], 5.00th=[ 221], 10.00th=[ 235], 20.00th=[ 359], 00:09:28.084 | 30.00th=[ 392], 40.00th=[ 408], 50.00th=[ 420], 60.00th=[ 433], 00:09:28.084 | 70.00th=[ 445], 80.00th=[ 461], 90.00th=[ 482], 95.00th=[ 498], 00:09:28.084 | 99.00th=[ 537], 99.50th=[ 562], 99.90th=[ 2474], 99.95th=[ 5080], 00:09:28.084 | 99.99th=[ 6521] 00:09:28.084 bw ( KiB/s): min= 8472, max=12816, per=23.02%, avg=9563.20, stdev=1828.54, samples=5 00:09:28.084 iops : min= 2118, max= 3204, avg=2390.80, stdev=457.14, samples=5 00:09:28.084 lat (usec) : 250=14.31%, 500=81.09%, 750=4.40%, 1000=0.04% 00:09:28.084 lat (msec) : 2=0.01%, 4=0.04%, 10=0.08% 00:09:28.084 cpu : usr=1.07%, sys=3.71%, ctx=7117, majf=0, minf=1 00:09:28.084 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:28.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.084 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.084 issued rwts: total=7114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.084 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:28.084 00:09:28.084 Run status group 0 (all jobs): 00:09:28.084 READ: bw=40.6MiB/s (42.5MB/s), 9506KiB/s-15.1MiB/s (9734kB/s-15.8MB/s), io=155MiB (162MB), run=2993-3817msec 00:09:28.084 00:09:28.084 Disk stats (read/write): 00:09:28.084 nvme0n1: ios=8249/0, merge=0/0, ticks=2974/0, in_queue=2974, util=95.11% 00:09:28.084 nvme0n2: ios=9502/0, merge=0/0, ticks=3071/0, in_queue=3071, util=95.00% 00:09:28.084 nvme0n3: ios=12118/0, merge=0/0, ticks=2944/0, in_queue=2944, util=96.27% 00:09:28.084 nvme0n4: ios=6827/0, merge=0/0, ticks=2514/0, in_queue=2514, util=96.39% 00:09:28.084 10:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:28.084 10:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:28.343 10:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:28.343 10:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:28.616 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:09:28.616 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:28.879 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:28.879 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:29.138 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:29.138 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:29.396 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:29.396 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66508 00:09:29.396 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:29.396 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:29.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.396 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:29.397 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:09:29.397 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.397 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:29.397 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:29.397 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.397 nvmf hotplug test: fio failed as expected 00:09:29.397 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:09:29.397 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:29.397 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:29.397 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:29.965 rmmod nvme_tcp 00:09:29.965 rmmod nvme_fabrics 00:09:29.965 rmmod nvme_keyring 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66119 ']' 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66119 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 66119 ']' 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 66119 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66119 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:29.965 killing process with pid 66119 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66119' 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 66119 00:09:29.965 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 66119 00:09:30.224 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:30.224 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:30.224 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:30.224 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:30.224 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:30.224 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:30.224 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:30.224 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:30.224 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:30.224 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:30.224 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:30.224 10:27:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:30.224 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:30.224 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:30.224 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:30.224 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:30.224 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:30.224 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:30.224 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:30.224 10:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:30.224 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:30.224 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:30.224 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:30.224 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.224 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.224 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.224 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:09:30.224 00:09:30.224 real 0m20.891s 00:09:30.224 user 1m20.664s 00:09:30.224 sys 0m8.752s 00:09:30.224 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:30.224 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.224 ************************************ 00:09:30.224 END TEST nvmf_fio_target 00:09:30.224 ************************************ 00:09:30.482 10:27:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:30.482 10:27:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:30.482 10:27:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:30.483 ************************************ 00:09:30.483 START TEST nvmf_bdevio 00:09:30.483 ************************************ 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:30.483 * Looking for test storage... 
00:09:30.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:30.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.483 --rc genhtml_branch_coverage=1 00:09:30.483 --rc genhtml_function_coverage=1 00:09:30.483 --rc genhtml_legend=1 00:09:30.483 --rc geninfo_all_blocks=1 00:09:30.483 --rc geninfo_unexecuted_blocks=1 00:09:30.483 00:09:30.483 ' 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:30.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.483 --rc genhtml_branch_coverage=1 00:09:30.483 --rc genhtml_function_coverage=1 00:09:30.483 --rc genhtml_legend=1 00:09:30.483 --rc geninfo_all_blocks=1 00:09:30.483 --rc geninfo_unexecuted_blocks=1 00:09:30.483 00:09:30.483 ' 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:30.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.483 --rc genhtml_branch_coverage=1 00:09:30.483 --rc genhtml_function_coverage=1 00:09:30.483 --rc genhtml_legend=1 00:09:30.483 --rc geninfo_all_blocks=1 00:09:30.483 --rc geninfo_unexecuted_blocks=1 00:09:30.483 00:09:30.483 ' 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:30.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.483 --rc genhtml_branch_coverage=1 00:09:30.483 --rc genhtml_function_coverage=1 00:09:30.483 --rc genhtml_legend=1 00:09:30.483 --rc geninfo_all_blocks=1 00:09:30.483 --rc geninfo_unexecuted_blocks=1 00:09:30.483 00:09:30.483 ' 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.483 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:30.484 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.484 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:30.484 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:30.484 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:30.484 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.484 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.484 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.484 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:30.484 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:30.484 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:30.484 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:30.484 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:30.743 Cannot find device "nvmf_init_br" 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:30.743 Cannot find device "nvmf_init_br2" 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:30.743 Cannot find device "nvmf_tgt_br" 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:30.743 Cannot find device "nvmf_tgt_br2" 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:30.743 Cannot find device "nvmf_init_br" 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:30.743 Cannot find device "nvmf_init_br2" 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:30.743 Cannot find device "nvmf_tgt_br" 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:30.743 Cannot find device "nvmf_tgt_br2" 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:30.743 Cannot find device "nvmf_br" 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:30.743 Cannot find device "nvmf_init_if" 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:30.743 Cannot find device "nvmf_init_if2" 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:30.743 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:30.743 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:30.743 
10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:30.743 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:31.003 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:31.003 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:09:31.003 00:09:31.003 --- 10.0.0.3 ping statistics --- 00:09:31.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.003 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:31.003 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:31.003 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:09:31.003 00:09:31.003 --- 10.0.0.4 ping statistics --- 00:09:31.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.003 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:31.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:31.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:09:31.003 00:09:31.003 --- 10.0.0.1 ping statistics --- 00:09:31.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.003 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:31.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:31.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:09:31.003 00:09:31.003 --- 10.0.0.2 ping statistics --- 00:09:31.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.003 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66879 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66879 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 66879 ']' 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:31.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:31.003 10:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.003 [2024-11-15 10:27:31.762678] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:09:31.003 [2024-11-15 10:27:31.762765] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.261 [2024-11-15 10:27:31.907843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:31.261 [2024-11-15 10:27:31.977716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.261 [2024-11-15 10:27:31.977777] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.261 [2024-11-15 10:27:31.977790] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.261 [2024-11-15 10:27:31.977798] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.262 [2024-11-15 10:27:31.977805] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.262 [2024-11-15 10:27:31.979322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:31.262 [2024-11-15 10:27:31.979455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:31.262 [2024-11-15 10:27:31.979604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.262 [2024-11-15 10:27:31.979605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:31.262 [2024-11-15 10:27:32.037790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:32.195 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:32.195 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:09:32.195 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:32.195 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:32.195 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:32.196 [2024-11-15 10:27:32.844551] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:32.196 Malloc0 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:32.196 [2024-11-15 10:27:32.911776] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:32.196 { 00:09:32.196 "params": { 00:09:32.196 "name": "Nvme$subsystem", 00:09:32.196 "trtype": "$TEST_TRANSPORT", 00:09:32.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:32.196 "adrfam": "ipv4", 00:09:32.196 "trsvcid": "$NVMF_PORT", 00:09:32.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:32.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:32.196 "hdgst": ${hdgst:-false}, 00:09:32.196 "ddgst": ${ddgst:-false} 00:09:32.196 }, 00:09:32.196 "method": "bdev_nvme_attach_controller" 00:09:32.196 } 00:09:32.196 EOF 00:09:32.196 )") 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:32.196 10:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:32.196 "params": { 00:09:32.196 "name": "Nvme1", 00:09:32.196 "trtype": "tcp", 00:09:32.196 "traddr": "10.0.0.3", 00:09:32.196 "adrfam": "ipv4", 00:09:32.196 "trsvcid": "4420", 00:09:32.196 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:32.196 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:32.196 "hdgst": false, 00:09:32.196 "ddgst": false 00:09:32.196 }, 00:09:32.196 "method": "bdev_nvme_attach_controller" 00:09:32.196 }' 00:09:32.196 [2024-11-15 10:27:32.966340] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:09:32.196 [2024-11-15 10:27:32.966428] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66922 ] 00:09:32.454 [2024-11-15 10:27:33.110878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:32.454 [2024-11-15 10:27:33.184729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.454 [2024-11-15 10:27:33.184637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.454 [2024-11-15 10:27:33.184721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.454 [2024-11-15 10:27:33.251813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:32.713 I/O targets: 00:09:32.713 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:32.713 00:09:32.713 00:09:32.713 CUnit - A unit testing framework for C - Version 2.1-3 00:09:32.713 http://cunit.sourceforge.net/ 00:09:32.713 00:09:32.713 00:09:32.713 Suite: bdevio tests on: Nvme1n1 00:09:32.713 Test: blockdev write read block ...passed 00:09:32.713 Test: blockdev write zeroes read block ...passed 00:09:32.713 Test: blockdev write zeroes read no split ...passed 00:09:32.713 Test: blockdev write zeroes read split ...passed 00:09:32.713 Test: blockdev write zeroes read split partial ...passed 00:09:32.713 Test: blockdev reset ...[2024-11-15 10:27:33.408503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:32.713 [2024-11-15 10:27:33.408649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a19180 (9): Bad file descriptor 00:09:32.713 [2024-11-15 10:27:33.422244] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:32.713 passed 00:09:32.713 Test: blockdev write read 8 blocks ...passed 00:09:32.713 Test: blockdev write read size > 128k ...passed 00:09:32.713 Test: blockdev write read invalid size ...passed 00:09:32.713 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:32.713 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:32.713 Test: blockdev write read max offset ...passed 00:09:32.713 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:32.713 Test: blockdev writev readv 8 blocks ...passed 00:09:32.713 Test: blockdev writev readv 30 x 1block ...passed 00:09:32.713 Test: blockdev writev readv block ...passed 00:09:32.713 Test: blockdev writev readv size > 128k ...passed 00:09:32.713 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:32.713 Test: blockdev comparev and writev ...[2024-11-15 10:27:33.430143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.713 [2024-11-15 10:27:33.430187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:32.713 [2024-11-15 10:27:33.430208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.713 [2024-11-15 10:27:33.430225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:32.713 [2024-11-15 10:27:33.430694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.713 [2024-11-15 10:27:33.430723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:32.713 [2024-11-15 10:27:33.430742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.713 [2024-11-15 10:27:33.430753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:32.713 [2024-11-15 10:27:33.431237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.713 [2024-11-15 10:27:33.431265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:32.713 [2024-11-15 10:27:33.431283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.713 [2024-11-15 10:27:33.431295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:32.713 [2024-11-15 10:27:33.431667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.713 [2024-11-15 10:27:33.431705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:32.713 [2024-11-15 10:27:33.431724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.713 [2024-11-15 10:27:33.431736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:32.713 passed 00:09:32.713 Test: blockdev nvme passthru rw ...passed 00:09:32.713 Test: blockdev nvme passthru vendor specific ...[2024-11-15 10:27:33.432558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:32.713 [2024-11-15 10:27:33.432584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:32.713 [2024-11-15 10:27:33.432695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:32.713 [2024-11-15 10:27:33.432717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:32.713 [2024-11-15 10:27:33.432825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:32.713 [2024-11-15 10:27:33.432854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:32.713 [2024-11-15 10:27:33.432955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:32.713 [2024-11-15 10:27:33.432972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:32.713 passed 00:09:32.713 Test: blockdev nvme admin passthru ...passed 00:09:32.713 Test: blockdev copy ...passed 00:09:32.713 00:09:32.713 Run Summary: Type Total Ran Passed Failed Inactive 00:09:32.713 suites 1 1 n/a 0 0 00:09:32.713 tests 23 23 23 0 0 00:09:32.713 asserts 152 152 152 0 n/a 00:09:32.713 00:09:32.713 Elapsed time = 0.152 seconds 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:32.972 rmmod nvme_tcp 00:09:32.972 rmmod nvme_fabrics 00:09:32.972 rmmod nvme_keyring 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
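The CUnit summary above (23 of 23 bdevio tests passed, 152 asserts) comes from the bdevio app launched at target/bdevio.sh@24 with the generated JSON piped in over /dev/fd/62. A sketch of an equivalent manual run, using the connection parameters printed earlier and assuming the usual subsystems/bdev wrapper that gen_nvmf_target_json adds around them (attaching controller "Nvme1" exposes the "Nvme1n1" bdev the suite exercises):

  # write a standard SPDK app config that attaches cnode1 over TCP, then run the bdevio CUnit suite
  printf '%s\n' '{ "subsystems": [ { "subsystem": "bdev", "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1" } } ] } ] }' > /tmp/bdevio.json
  ./test/bdev/bdevio/bdevio --json /tmp/bdevio.json

The killprocess, iptables restore, and veth/namespace removal in the following lines are the normal nvmftestfini path, mirroring the setup at the top of the test in reverse.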
00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 66879 ']' 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66879 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 66879 ']' 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 66879 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66879 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:09:32.972 killing process with pid 66879 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66879' 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 66879 00:09:32.972 10:27:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 66879 00:09:33.231 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:33.231 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:33.231 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:33.231 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:33.231 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:33.231 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:33.231 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:33.231 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:33.231 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:33.231 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:33.231 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:33.231 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:33.231 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:33.231 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:33.231 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:33.488 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:33.488 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:33.488 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:33.488 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:09:33.488 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:33.488 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:33.488 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:33.488 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:33.488 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.488 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.488 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.488 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:09:33.488 00:09:33.488 real 0m3.130s 00:09:33.488 user 0m9.637s 00:09:33.488 sys 0m0.845s 00:09:33.488 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:33.488 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.488 ************************************ 00:09:33.488 END TEST nvmf_bdevio 00:09:33.489 ************************************ 00:09:33.489 10:27:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:33.489 00:09:33.489 real 2m36.771s 00:09:33.489 user 6m55.929s 00:09:33.489 sys 0m51.755s 00:09:33.489 10:27:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:33.489 10:27:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:33.489 ************************************ 00:09:33.489 END TEST nvmf_target_core 00:09:33.489 ************************************ 00:09:33.489 10:27:34 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:33.489 10:27:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:33.489 10:27:34 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:33.489 10:27:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:33.489 ************************************ 00:09:33.489 START TEST nvmf_target_extra 00:09:33.489 ************************************ 00:09:33.489 10:27:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:33.747 * Looking for test storage... 
00:09:33.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:33.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.747 --rc genhtml_branch_coverage=1 00:09:33.747 --rc genhtml_function_coverage=1 00:09:33.747 --rc genhtml_legend=1 00:09:33.747 --rc geninfo_all_blocks=1 00:09:33.747 --rc geninfo_unexecuted_blocks=1 00:09:33.747 00:09:33.747 ' 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:33.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.747 --rc genhtml_branch_coverage=1 00:09:33.747 --rc genhtml_function_coverage=1 00:09:33.747 --rc genhtml_legend=1 00:09:33.747 --rc geninfo_all_blocks=1 00:09:33.747 --rc geninfo_unexecuted_blocks=1 00:09:33.747 00:09:33.747 ' 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:33.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.747 --rc genhtml_branch_coverage=1 00:09:33.747 --rc genhtml_function_coverage=1 00:09:33.747 --rc genhtml_legend=1 00:09:33.747 --rc geninfo_all_blocks=1 00:09:33.747 --rc geninfo_unexecuted_blocks=1 00:09:33.747 00:09:33.747 ' 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:33.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.747 --rc genhtml_branch_coverage=1 00:09:33.747 --rc genhtml_function_coverage=1 00:09:33.747 --rc genhtml_legend=1 00:09:33.747 --rc geninfo_all_blocks=1 00:09:33.747 --rc geninfo_unexecuted_blocks=1 00:09:33.747 00:09:33.747 ' 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.747 10:27:34 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.747 10:27:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.748 10:27:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.748 10:27:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.748 10:27:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.748 10:27:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:33.748 10:27:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.748 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:33.748 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:33.748 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:33.748 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.748 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.748 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.748 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:33.748 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:33.748 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:33.748 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:33.748 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:33.748 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:33.748 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:33.748 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:09:33.748 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:33.748 10:27:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:33.748 10:27:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:33.748 10:27:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:33.748 ************************************ 00:09:33.748 START TEST nvmf_auth_target 00:09:33.748 ************************************ 00:09:33.748 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:34.007 * Looking for test storage... 
00:09:34.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:34.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.007 --rc genhtml_branch_coverage=1 00:09:34.007 --rc genhtml_function_coverage=1 00:09:34.007 --rc genhtml_legend=1 00:09:34.007 --rc geninfo_all_blocks=1 00:09:34.007 --rc geninfo_unexecuted_blocks=1 00:09:34.007 00:09:34.007 ' 00:09:34.007 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:34.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.007 --rc genhtml_branch_coverage=1 00:09:34.007 --rc genhtml_function_coverage=1 00:09:34.007 --rc genhtml_legend=1 00:09:34.007 --rc geninfo_all_blocks=1 00:09:34.007 --rc geninfo_unexecuted_blocks=1 00:09:34.007 00:09:34.007 ' 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:34.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.008 --rc genhtml_branch_coverage=1 00:09:34.008 --rc genhtml_function_coverage=1 00:09:34.008 --rc genhtml_legend=1 00:09:34.008 --rc geninfo_all_blocks=1 00:09:34.008 --rc geninfo_unexecuted_blocks=1 00:09:34.008 00:09:34.008 ' 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:34.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.008 --rc genhtml_branch_coverage=1 00:09:34.008 --rc genhtml_function_coverage=1 00:09:34.008 --rc genhtml_legend=1 00:09:34.008 --rc geninfo_all_blocks=1 00:09:34.008 --rc geninfo_unexecuted_blocks=1 00:09:34.008 00:09:34.008 ' 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:34.008 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
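The "[: : integer expression expected" message above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']' while the tested variable is empty; it is benign here, since the branch is simply not taken and the trace continues at nvmf/common.sh@37. A hedged sketch of the usual way to keep such a test quiet when the variable may be unset or empty (SOME_FLAG and enable_feature are hypothetical names, not taken from common.sh):

  # Defaulting to 0 avoids the "integer expression expected" noise while
  # preserving the behaviour when the flag is genuinely set to 1.
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      enable_feature
  fi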
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:34.008 
10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:34.008 Cannot find device "nvmf_init_br" 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:34.008 Cannot find device "nvmf_init_br2" 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:34.008 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:34.008 Cannot find device "nvmf_tgt_br" 00:09:34.009 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:09:34.009 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:34.009 Cannot find device "nvmf_tgt_br2" 00:09:34.009 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:09:34.009 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:34.268 Cannot find device "nvmf_init_br" 00:09:34.268 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:09:34.268 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:34.268 Cannot find device "nvmf_init_br2" 00:09:34.268 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:09:34.268 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:34.268 Cannot find device "nvmf_tgt_br" 00:09:34.268 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:09:34.268 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:34.268 Cannot find device "nvmf_tgt_br2" 00:09:34.268 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:09:34.268 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:34.268 Cannot find device "nvmf_br" 00:09:34.268 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:09:34.268 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:34.268 Cannot find device "nvmf_init_if" 00:09:34.268 10:27:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:09:34.268 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:34.268 Cannot find device "nvmf_init_if2" 00:09:34.268 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:09:34.268 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:34.268 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:34.268 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:09:34.268 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:34.268 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:34.268 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:09:34.268 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:34.268 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:34.268 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:34.268 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:34.268 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:34.268 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:34.268 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:34.268 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:34.268 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:34.268 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:34.268 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:34.268 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:34.268 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:34.268 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:34.268 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:34.268 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:34.268 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:34.268 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:34.268 10:27:35 
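nvmf_veth_init, traced above, first tears down any leftover interfaces (hence the harmless "Cannot find device" messages), then creates the nvmf_tgt_ns_spdk namespace, two initiator-side veth pairs that stay in the root namespace and two target-side pairs whose endpoints are moved into the namespace, and assigns the 10.0.0.1-10.0.0.4/24 addresses before bringing everything up. A condensed, standalone sketch of the same topology, reduced to one interface pair per side for clarity:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, root namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up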
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:34.268 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:34.268 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:34.268 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:34.268 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:34.268 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:34.268 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:34.529 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:34.529 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:09:34.529 00:09:34.529 --- 10.0.0.3 ping statistics --- 00:09:34.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.529 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:34.529 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:34.529 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:09:34.529 00:09:34.529 --- 10.0.0.4 ping statistics --- 00:09:34.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.529 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:34.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:34.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:09:34.529 00:09:34.529 --- 10.0.0.1 ping statistics --- 00:09:34.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.529 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:34.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:34.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:09:34.529 00:09:34.529 --- 10.0.0.2 ping statistics --- 00:09:34.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.529 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67206 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67206 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 67206 ']' 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
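The four bridge-side veth ends are then enslaved to an nvmf_br bridge, iptables ACCEPT rules are inserted for TCP port 4420 on the initiator interfaces, and connectivity is verified in both directions: from the root namespace to the target addresses (10.0.0.3/10.0.0.4) and from inside nvmf_tgt_ns_spdk back to the initiator addresses (10.0.0.1/10.0.0.2), all four pings succeeding above. A minimal sketch of that bridging and verification step, with error handling omitted:

  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$ifc" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                   # root namespace -> target interface
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator interface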
00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:34.529 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.466 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:35.466 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:09:35.466 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:35.466 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67238 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8e0714c679f13adb93077cddab4505305839a993867b475c 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.vHx 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8e0714c679f13adb93077cddab4505305839a993867b475c 0 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8e0714c679f13adb93077cddab4505305839a993867b475c 0 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8e0714c679f13adb93077cddab4505305839a993867b475c 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:35.724 10:27:36 
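Two SPDK applications are launched for this test: nvmf_tgt (pid 67206) runs inside nvmf_tgt_ns_spdk with -L nvmf_auth debug logging and acts as the authenticating target on its default RPC socket, while a second spdk_tgt (pid 67238) runs in the root namespace with its RPC socket on /var/tmp/host.sock and -L nvme_auth, playing the host role. Schematically, with the waitforlisten polling of each socket left out:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &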
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.vHx 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.vHx 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.vHx 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:35.724 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=131feecdaa5c9a0d5094ca2f797e7d72b3b53d01ab7761bedf312ba9be60a54e 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.AKl 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 131feecdaa5c9a0d5094ca2f797e7d72b3b53d01ab7761bedf312ba9be60a54e 3 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 131feecdaa5c9a0d5094ca2f797e7d72b3b53d01ab7761bedf312ba9be60a54e 3 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=131feecdaa5c9a0d5094ca2f797e7d72b3b53d01ab7761bedf312ba9be60a54e 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.AKl 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.AKl 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.AKl 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:09:35.725 10:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a851b2cbcf7c5a8e2e0b4295b71d33c1 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.DNk 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a851b2cbcf7c5a8e2e0b4295b71d33c1 1 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a851b2cbcf7c5a8e2e0b4295b71d33c1 1 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a851b2cbcf7c5a8e2e0b4295b71d33c1 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:35.725 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.DNk 00:09:35.984 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.DNk 00:09:35.984 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.DNk 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9c44a645d773de2780fd793aa0d6b6284f43675969c602c2 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Wf5 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9c44a645d773de2780fd793aa0d6b6284f43675969c602c2 2 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9c44a645d773de2780fd793aa0d6b6284f43675969c602c2 2 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9c44a645d773de2780fd793aa0d6b6284f43675969c602c2 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Wf5 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Wf5 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Wf5 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=49d10b23f1ba0b523d117aa48560bb5e082474602b356bcb 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.UWJ 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 49d10b23f1ba0b523d117aa48560bb5e082474602b356bcb 2 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 49d10b23f1ba0b523d117aa48560bb5e082474602b356bcb 2 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=49d10b23f1ba0b523d117aa48560bb5e082474602b356bcb 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.UWJ 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.UWJ 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.UWJ 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:35.985 10:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=250ca68b1fca4283796ffa1889b004d8 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ygn 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 250ca68b1fca4283796ffa1889b004d8 1 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 250ca68b1fca4283796ffa1889b004d8 1 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=250ca68b1fca4283796ffa1889b004d8 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ygn 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ygn 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.ygn 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=543fb8e13d38542daff16bbd86b30dc4fe201a344d162af8819f3bc9a3cb96a1 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.dvM 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
543fb8e13d38542daff16bbd86b30dc4fe201a344d162af8819f3bc9a3cb96a1 3 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 543fb8e13d38542daff16bbd86b30dc4fe201a344d162af8819f3bc9a3cb96a1 3 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=543fb8e13d38542daff16bbd86b30dc4fe201a344d162af8819f3bc9a3cb96a1 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:35.985 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.dvM 00:09:36.244 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.dvM 00:09:36.244 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.dvM 00:09:36.244 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:09:36.244 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67206 00:09:36.244 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 67206 ']' 00:09:36.244 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.244 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:36.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.244 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.244 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:36.244 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.502 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:36.502 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:09:36.502 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67238 /var/tmp/host.sock 00:09:36.502 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 67238 ']' 00:09:36.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:36.502 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:09:36.502 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:36.502 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
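The gen_dhchap_key calls traced above build four host keys and their companion controller keys: keys[0..3] use null/sha256/sha384/sha512 digests, ckeys[0..2] pair each with a different digest, and ckeys[3] is deliberately left empty so the key3 iterations exercise authentication without a controller key. Each call draws len/2 random bytes with xxd, hands the hex string plus a digest id (0 for null, 1-3 for SHA-256/384/512, per the digests map above) to an inline python helper that renders the DHHC-1 secret, and stores the result in a mode-0600 temp file. A sketch of the random-material part, mirroring the traced commands (the DHHC-1 wrapping done by the python helper is omitted here):

  # 48 hex characters of key material, as in gen_dhchap_key null 48
  key=$(xxd -p -c0 -l 24 /dev/urandom)
  file=$(mktemp -t spdk.key-null.XXX)
  # format_dhchap_key turns $key into a "DHHC-1:00:...:" secret written to $file
  chmod 0600 "$file"
  echo "$file"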
00:09:36.502 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:36.502 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.761 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:36.761 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:09:36.761 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:09:36.761 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.761 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.761 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.761 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:36.761 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vHx 00:09:36.761 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.761 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.761 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.761 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.vHx 00:09:36.761 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.vHx 00:09:37.020 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.AKl ]] 00:09:37.020 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AKl 00:09:37.020 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.020 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.020 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.020 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AKl 00:09:37.020 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AKl 00:09:37.280 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:37.280 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.DNk 00:09:37.280 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.280 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.539 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.539 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.DNk 00:09:37.539 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.DNk 00:09:37.539 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Wf5 ]] 00:09:37.539 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Wf5 00:09:37.539 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.539 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.798 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.798 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Wf5 00:09:37.798 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Wf5 00:09:38.057 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:38.057 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.UWJ 00:09:38.057 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.057 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.057 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.058 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.UWJ 00:09:38.058 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.UWJ 00:09:38.317 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.ygn ]] 00:09:38.317 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ygn 00:09:38.317 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.317 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.317 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.317 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ygn 00:09:38.317 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ygn 00:09:38.575 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:38.575 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.dvM 00:09:38.575 10:27:39 
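Each secret file is registered on both sides: rpc_cmd adds it to the target's keyring (the nvmf_tgt started earlier, reached over the default /var/tmp/spdk.sock), and hostrpc repeats the call against the host-side spdk_tgt by pointing rpc.py at /var/tmp/host.sock. Roughly equivalent, condensed to the key0/ckey0 pair with the file names from the trace:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # target side (nvmf_tgt, default RPC socket)
  $RPC keyring_file_add_key key0  /tmp/spdk.key-null.vHx
  $RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AKl
  # host side (spdk_tgt listening on /var/tmp/host.sock)
  $RPC -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.vHx
  $RPC -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AKl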
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.575 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.575 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.575 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.dvM 00:09:38.575 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.dvM 00:09:38.834 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:09:38.834 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:09:38.834 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:38.834 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:38.834 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:38.834 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:39.093 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:09:39.093 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:39.093 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:39.093 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:39.093 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:39.093 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:39.093 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:39.093 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.093 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.093 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.093 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:39.093 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:39.093 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:39.359 00:09:39.359 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:39.359 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:39.359 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:39.619 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:39.619 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:39.619 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.619 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.619 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.619 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:39.619 { 00:09:39.619 "cntlid": 1, 00:09:39.619 "qid": 0, 00:09:39.619 "state": "enabled", 00:09:39.619 "thread": "nvmf_tgt_poll_group_000", 00:09:39.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:09:39.619 "listen_address": { 00:09:39.619 "trtype": "TCP", 00:09:39.619 "adrfam": "IPv4", 00:09:39.619 "traddr": "10.0.0.3", 00:09:39.619 "trsvcid": "4420" 00:09:39.619 }, 00:09:39.619 "peer_address": { 00:09:39.619 "trtype": "TCP", 00:09:39.619 "adrfam": "IPv4", 00:09:39.619 "traddr": "10.0.0.1", 00:09:39.619 "trsvcid": "48198" 00:09:39.619 }, 00:09:39.619 "auth": { 00:09:39.619 "state": "completed", 00:09:39.619 "digest": "sha256", 00:09:39.619 "dhgroup": "null" 00:09:39.619 } 00:09:39.619 } 00:09:39.619 ]' 00:09:39.619 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:39.878 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:39.878 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:39.878 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:39.878 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:39.878 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:39.878 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:39.878 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:40.137 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:09:40.137 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
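connect_authenticate, whose sha256/null/key0 iteration is traced above, performs the actual check for one digest/dhgroup/key combination: the host app is restricted to that digest and dhgroup via bdev_nvme_set_options, the host NQN is allowed on the subsystem with the key pair, a bdev controller is attached over TCP, and nvmf_subsystem_get_qpairs must then report the qpair with auth state "completed" and the expected digest and dhgroup (as in the JSON above) before the controller is detached again. A condensed sketch of that sequence, reusing the sockets, addresses, and NQNs from the trace:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'    # expect "completed"
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0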
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:09:45.409 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:45.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:45.409 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:09:45.409 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.409 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.409 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.409 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:45.409 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:45.409 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:45.409 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:09:45.409 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:45.409 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:45.409 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:45.409 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:45.409 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:45.409 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:45.409 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.409 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.409 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.409 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:45.409 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:45.409 10:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:45.409 00:09:45.409 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:45.409 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:45.409 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:45.409 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:45.409 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:45.409 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.409 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.409 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.409 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:45.409 { 00:09:45.409 "cntlid": 3, 00:09:45.409 "qid": 0, 00:09:45.409 "state": "enabled", 00:09:45.409 "thread": "nvmf_tgt_poll_group_000", 00:09:45.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:09:45.409 "listen_address": { 00:09:45.409 "trtype": "TCP", 00:09:45.409 "adrfam": "IPv4", 00:09:45.409 "traddr": "10.0.0.3", 00:09:45.409 "trsvcid": "4420" 00:09:45.409 }, 00:09:45.409 "peer_address": { 00:09:45.409 "trtype": "TCP", 00:09:45.409 "adrfam": "IPv4", 00:09:45.409 "traddr": "10.0.0.1", 00:09:45.409 "trsvcid": "48228" 00:09:45.409 }, 00:09:45.409 "auth": { 00:09:45.409 "state": "completed", 00:09:45.409 "digest": "sha256", 00:09:45.409 "dhgroup": "null" 00:09:45.409 } 00:09:45.409 } 00:09:45.409 ]' 00:09:45.409 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:45.670 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:45.670 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:45.670 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:45.670 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:45.670 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:45.670 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:45.670 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:45.929 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret 
DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:09:45.930 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:09:46.496 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:46.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:46.496 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:09:46.496 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.496 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.496 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.496 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:46.496 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:46.496 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:46.754 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:09:46.754 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:46.754 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:46.754 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:46.754 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:46.754 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:46.755 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:46.755 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.755 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.013 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.013 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:47.013 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:47.013 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:47.272 00:09:47.272 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:47.272 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:47.272 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:47.841 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:47.841 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:47.841 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.841 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.841 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.841 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:47.841 { 00:09:47.841 "cntlid": 5, 00:09:47.841 "qid": 0, 00:09:47.841 "state": "enabled", 00:09:47.841 "thread": "nvmf_tgt_poll_group_000", 00:09:47.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:09:47.841 "listen_address": { 00:09:47.841 "trtype": "TCP", 00:09:47.841 "adrfam": "IPv4", 00:09:47.841 "traddr": "10.0.0.3", 00:09:47.841 "trsvcid": "4420" 00:09:47.841 }, 00:09:47.841 "peer_address": { 00:09:47.841 "trtype": "TCP", 00:09:47.841 "adrfam": "IPv4", 00:09:47.841 "traddr": "10.0.0.1", 00:09:47.841 "trsvcid": "48246" 00:09:47.841 }, 00:09:47.841 "auth": { 00:09:47.841 "state": "completed", 00:09:47.841 "digest": "sha256", 00:09:47.841 "dhgroup": "null" 00:09:47.841 } 00:09:47.841 } 00:09:47.841 ]' 00:09:47.841 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:47.841 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:47.841 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:47.841 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:47.841 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:47.841 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:47.841 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:47.841 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:48.100 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:09:48.100 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:09:49.037 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:49.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:49.037 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:09:49.037 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.037 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.037 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.037 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:49.037 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:49.037 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:49.296 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:09:49.296 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:49.296 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:49.296 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:49.296 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:49.296 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:49.296 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key3 00:09:49.296 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.296 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.296 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.296 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:49.296 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:49.296 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:49.555 00:09:49.555 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:49.555 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:49.555 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:49.814 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:49.814 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:49.814 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.814 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.073 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.073 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:50.073 { 00:09:50.073 "cntlid": 7, 00:09:50.073 "qid": 0, 00:09:50.073 "state": "enabled", 00:09:50.073 "thread": "nvmf_tgt_poll_group_000", 00:09:50.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:09:50.073 "listen_address": { 00:09:50.073 "trtype": "TCP", 00:09:50.073 "adrfam": "IPv4", 00:09:50.073 "traddr": "10.0.0.3", 00:09:50.073 "trsvcid": "4420" 00:09:50.073 }, 00:09:50.073 "peer_address": { 00:09:50.073 "trtype": "TCP", 00:09:50.073 "adrfam": "IPv4", 00:09:50.073 "traddr": "10.0.0.1", 00:09:50.073 "trsvcid": "56282" 00:09:50.073 }, 00:09:50.073 "auth": { 00:09:50.073 "state": "completed", 00:09:50.073 "digest": "sha256", 00:09:50.073 "dhgroup": "null" 00:09:50.073 } 00:09:50.073 } 00:09:50.073 ]' 00:09:50.073 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:50.073 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:50.073 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:50.073 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:50.073 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:50.073 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:50.073 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:50.073 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:50.332 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:09:50.332 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:09:51.267 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:51.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:51.267 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:09:51.267 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.267 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.267 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.267 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:51.267 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:51.267 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:51.267 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:51.526 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:09:51.526 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:51.526 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:51.526 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:51.526 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:51.526 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:51.526 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:51.526 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.526 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.526 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.526 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:51.526 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:51.526 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:51.785 00:09:51.785 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:51.785 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:51.785 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:52.044 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:52.044 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:52.044 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.044 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.044 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.044 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:52.044 { 00:09:52.044 "cntlid": 9, 00:09:52.044 "qid": 0, 00:09:52.044 "state": "enabled", 00:09:52.044 "thread": "nvmf_tgt_poll_group_000", 00:09:52.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:09:52.044 "listen_address": { 00:09:52.044 "trtype": "TCP", 00:09:52.044 "adrfam": "IPv4", 00:09:52.044 "traddr": "10.0.0.3", 00:09:52.044 "trsvcid": "4420" 00:09:52.044 }, 00:09:52.044 "peer_address": { 00:09:52.044 "trtype": "TCP", 00:09:52.044 "adrfam": "IPv4", 00:09:52.044 "traddr": "10.0.0.1", 00:09:52.044 "trsvcid": "56292" 00:09:52.044 }, 00:09:52.044 "auth": { 00:09:52.044 "state": "completed", 00:09:52.044 "digest": "sha256", 00:09:52.044 "dhgroup": "ffdhe2048" 00:09:52.044 } 00:09:52.044 } 00:09:52.044 ]' 00:09:52.044 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:52.302 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:52.302 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:52.302 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:52.302 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:52.302 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:52.302 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:52.302 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:52.560 
10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:09:52.560 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:09:53.533 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:53.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:53.533 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:09:53.534 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.534 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.534 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.534 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:53.534 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:53.534 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:53.534 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:09:53.534 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:53.534 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:53.534 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:53.534 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:53.534 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:53.534 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:53.534 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.534 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.534 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.534 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:53.534 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:53.534 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:53.814 00:09:53.814 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:53.814 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:53.814 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:54.386 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:54.386 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:54.386 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.386 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.386 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.386 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:54.386 { 00:09:54.386 "cntlid": 11, 00:09:54.386 "qid": 0, 00:09:54.386 "state": "enabled", 00:09:54.386 "thread": "nvmf_tgt_poll_group_000", 00:09:54.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:09:54.386 "listen_address": { 00:09:54.386 "trtype": "TCP", 00:09:54.386 "adrfam": "IPv4", 00:09:54.386 "traddr": "10.0.0.3", 00:09:54.386 "trsvcid": "4420" 00:09:54.386 }, 00:09:54.386 "peer_address": { 00:09:54.386 "trtype": "TCP", 00:09:54.386 "adrfam": "IPv4", 00:09:54.386 "traddr": "10.0.0.1", 00:09:54.386 "trsvcid": "56328" 00:09:54.386 }, 00:09:54.386 "auth": { 00:09:54.386 "state": "completed", 00:09:54.386 "digest": "sha256", 00:09:54.386 "dhgroup": "ffdhe2048" 00:09:54.386 } 00:09:54.386 } 00:09:54.386 ]' 00:09:54.386 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:54.386 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:54.386 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:54.386 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:54.386 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:54.386 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:54.386 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:54.386 
10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:54.645 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:09:54.645 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:09:55.582 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:55.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:55.582 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:09:55.582 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.582 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.582 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.582 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:55.582 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:55.582 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:55.582 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:09:55.582 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:55.582 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:55.582 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:55.582 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:55.582 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:55.582 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:55.583 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.583 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.583 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:09:55.583 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:55.583 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:55.583 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:56.150 00:09:56.150 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:56.150 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:56.150 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:56.409 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:56.409 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:56.409 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.409 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.409 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.409 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:56.409 { 00:09:56.409 "cntlid": 13, 00:09:56.409 "qid": 0, 00:09:56.409 "state": "enabled", 00:09:56.409 "thread": "nvmf_tgt_poll_group_000", 00:09:56.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:09:56.409 "listen_address": { 00:09:56.409 "trtype": "TCP", 00:09:56.409 "adrfam": "IPv4", 00:09:56.409 "traddr": "10.0.0.3", 00:09:56.409 "trsvcid": "4420" 00:09:56.409 }, 00:09:56.409 "peer_address": { 00:09:56.409 "trtype": "TCP", 00:09:56.409 "adrfam": "IPv4", 00:09:56.409 "traddr": "10.0.0.1", 00:09:56.409 "trsvcid": "56354" 00:09:56.409 }, 00:09:56.409 "auth": { 00:09:56.409 "state": "completed", 00:09:56.409 "digest": "sha256", 00:09:56.409 "dhgroup": "ffdhe2048" 00:09:56.409 } 00:09:56.409 } 00:09:56.409 ]' 00:09:56.409 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:56.409 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:56.409 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:56.409 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:56.409 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:56.409 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:56.409 10:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:56.409 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:56.668 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:09:56.668 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:09:57.605 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:57.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:57.605 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:09:57.605 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.605 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.605 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.605 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:57.605 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:57.605 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:57.605 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:09:57.605 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:57.605 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:57.605 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:57.605 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:57.605 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:57.605 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key3 00:09:57.605 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.605 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
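
[Editorial note, not part of the captured log] The passes above and below all repeat the same per-key authentication cycle: restrict the host-side initiator to one digest/DH-group pair, register the host on the target with the key pair under test, attach a controller through the host-side bdev layer, confirm via nvmf_subsystem_get_qpairs that the qpair finished authentication with the expected digest and dhgroup, then detach and remove the host (the log additionally exercises the kernel initiator with nvme connect/disconnect and raw DHHC-1 secrets before removing the host, which is left out here). The following is a minimal sketch of that cycle, not the actual target/auth.sh code. Assumptions: the keyring entries key0..key3 and ckey0..ckey3 were registered earlier in the test run, the target app answers on the default RPC socket so only the host-side app needs -s /var/tmp/host.sock, and the single jq -e expression condenses the three separate state/digest/dhgroup checks seen in the log.

#!/usr/bin/env bash
# Sketch of one DH-HMAC-CHAP authentication pass, mirroring the RPCs visible in the log.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3

    # Limit the host-side initiator to the digest/DH-group combination under test.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Allow the host on the target side, bound to the key pair for this pass.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Attach a controller from the host-side bdev layer using the same keys.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Verify the qpair really negotiated the expected parameters.
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" |
        jq -e --arg d "$digest" --arg g "$dhgroup" \
            '.[0].auth | .state == "completed" and .digest == $d and .dhgroup == $g'

    # Tear the pass down again.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
}

# Example: the sha256 + ffdhe2048 + key1 pass seen in this part of the log.
connect_authenticate sha256 ffdhe2048 1
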
00:09:57.605 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.605 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:57.605 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:57.605 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:58.216 00:09:58.216 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:58.216 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:58.216 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:58.475 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:58.475 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:58.475 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.475 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.475 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.475 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:58.475 { 00:09:58.475 "cntlid": 15, 00:09:58.475 "qid": 0, 00:09:58.475 "state": "enabled", 00:09:58.475 "thread": "nvmf_tgt_poll_group_000", 00:09:58.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:09:58.475 "listen_address": { 00:09:58.475 "trtype": "TCP", 00:09:58.475 "adrfam": "IPv4", 00:09:58.475 "traddr": "10.0.0.3", 00:09:58.475 "trsvcid": "4420" 00:09:58.475 }, 00:09:58.475 "peer_address": { 00:09:58.475 "trtype": "TCP", 00:09:58.475 "adrfam": "IPv4", 00:09:58.475 "traddr": "10.0.0.1", 00:09:58.475 "trsvcid": "53384" 00:09:58.475 }, 00:09:58.475 "auth": { 00:09:58.475 "state": "completed", 00:09:58.475 "digest": "sha256", 00:09:58.475 "dhgroup": "ffdhe2048" 00:09:58.475 } 00:09:58.475 } 00:09:58.475 ]' 00:09:58.475 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:58.475 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:58.475 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:58.475 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:58.475 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:58.475 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:58.475 
10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:58.475 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:59.044 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:09:59.044 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:09:59.611 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:59.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:59.611 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:09:59.611 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.611 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.611 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.611 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:59.611 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:59.611 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:59.611 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:59.870 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:09:59.870 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:59.870 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:59.870 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:59.870 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:59.870 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:59.870 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:59.870 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.870 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:09:59.870 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.870 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:59.870 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:59.870 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:00.129 00:10:00.388 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:00.388 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:00.388 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:00.647 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:00.647 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:00.647 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.647 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.647 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.647 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:00.647 { 00:10:00.647 "cntlid": 17, 00:10:00.647 "qid": 0, 00:10:00.647 "state": "enabled", 00:10:00.647 "thread": "nvmf_tgt_poll_group_000", 00:10:00.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:00.647 "listen_address": { 00:10:00.647 "trtype": "TCP", 00:10:00.647 "adrfam": "IPv4", 00:10:00.647 "traddr": "10.0.0.3", 00:10:00.647 "trsvcid": "4420" 00:10:00.647 }, 00:10:00.647 "peer_address": { 00:10:00.647 "trtype": "TCP", 00:10:00.647 "adrfam": "IPv4", 00:10:00.647 "traddr": "10.0.0.1", 00:10:00.647 "trsvcid": "53412" 00:10:00.647 }, 00:10:00.647 "auth": { 00:10:00.647 "state": "completed", 00:10:00.647 "digest": "sha256", 00:10:00.647 "dhgroup": "ffdhe3072" 00:10:00.647 } 00:10:00.647 } 00:10:00.647 ]' 00:10:00.647 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:00.647 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:00.647 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:00.647 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:00.647 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:00.906 10:28:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:00.906 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:00.906 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:01.165 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:10:01.165 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:10:01.732 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:01.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:01.732 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:01.732 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.732 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.732 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.732 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:01.732 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:01.732 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:01.991 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:10:01.991 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:01.991 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:01.991 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:01.991 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:01.991 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:01.991 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:10:01.991 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.991 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.991 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.991 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:01.991 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:01.991 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:02.559 00:10:02.559 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:02.559 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:02.559 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:02.818 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:02.818 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:02.818 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.818 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.818 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.818 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:02.818 { 00:10:02.818 "cntlid": 19, 00:10:02.818 "qid": 0, 00:10:02.818 "state": "enabled", 00:10:02.818 "thread": "nvmf_tgt_poll_group_000", 00:10:02.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:02.818 "listen_address": { 00:10:02.818 "trtype": "TCP", 00:10:02.818 "adrfam": "IPv4", 00:10:02.818 "traddr": "10.0.0.3", 00:10:02.818 "trsvcid": "4420" 00:10:02.818 }, 00:10:02.818 "peer_address": { 00:10:02.818 "trtype": "TCP", 00:10:02.818 "adrfam": "IPv4", 00:10:02.818 "traddr": "10.0.0.1", 00:10:02.818 "trsvcid": "53436" 00:10:02.818 }, 00:10:02.818 "auth": { 00:10:02.818 "state": "completed", 00:10:02.818 "digest": "sha256", 00:10:02.818 "dhgroup": "ffdhe3072" 00:10:02.818 } 00:10:02.818 } 00:10:02.818 ]' 00:10:02.818 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:02.818 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:02.818 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:02.818 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:02.818 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:02.818 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:02.818 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:02.818 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:03.077 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:10:03.077 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:10:04.013 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:04.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:04.013 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:04.013 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.013 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.013 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.013 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:04.013 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:04.013 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:04.272 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:10:04.272 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:04.272 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:04.272 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:04.272 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:04.272 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:04.272 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:04.272 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.272 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.272 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.272 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:04.272 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:04.272 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:04.531 00:10:04.531 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:04.531 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:04.531 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:05.098 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:05.098 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:05.098 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.098 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.098 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.098 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:05.098 { 00:10:05.098 "cntlid": 21, 00:10:05.098 "qid": 0, 00:10:05.098 "state": "enabled", 00:10:05.098 "thread": "nvmf_tgt_poll_group_000", 00:10:05.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:05.098 "listen_address": { 00:10:05.098 "trtype": "TCP", 00:10:05.098 "adrfam": "IPv4", 00:10:05.098 "traddr": "10.0.0.3", 00:10:05.098 "trsvcid": "4420" 00:10:05.098 }, 00:10:05.098 "peer_address": { 00:10:05.098 "trtype": "TCP", 00:10:05.098 "adrfam": "IPv4", 00:10:05.098 "traddr": "10.0.0.1", 00:10:05.098 "trsvcid": "53458" 00:10:05.098 }, 00:10:05.098 "auth": { 00:10:05.098 "state": "completed", 00:10:05.098 "digest": "sha256", 00:10:05.098 "dhgroup": "ffdhe3072" 00:10:05.098 } 00:10:05.098 } 00:10:05.098 ]' 00:10:05.098 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:05.098 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:05.098 10:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:05.098 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:05.098 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:05.098 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:05.098 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:05.098 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:05.356 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:10:05.356 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:10:06.290 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:06.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:06.290 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:06.290 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.290 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.290 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.290 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:06.290 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:06.290 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:06.548 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:10:06.548 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:06.548 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:06.548 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:06.548 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:06.548 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:06.548 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key3 00:10:06.548 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.548 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.548 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.548 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:06.548 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:06.548 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:06.806 00:10:06.806 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:06.806 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:06.806 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:07.373 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:07.373 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:07.373 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.373 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.373 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.373 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:07.373 { 00:10:07.373 "cntlid": 23, 00:10:07.373 "qid": 0, 00:10:07.373 "state": "enabled", 00:10:07.373 "thread": "nvmf_tgt_poll_group_000", 00:10:07.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:07.373 "listen_address": { 00:10:07.373 "trtype": "TCP", 00:10:07.373 "adrfam": "IPv4", 00:10:07.373 "traddr": "10.0.0.3", 00:10:07.373 "trsvcid": "4420" 00:10:07.373 }, 00:10:07.373 "peer_address": { 00:10:07.373 "trtype": "TCP", 00:10:07.373 "adrfam": "IPv4", 00:10:07.373 "traddr": "10.0.0.1", 00:10:07.373 "trsvcid": "53482" 00:10:07.373 }, 00:10:07.373 "auth": { 00:10:07.373 "state": "completed", 00:10:07.373 "digest": "sha256", 00:10:07.373 "dhgroup": "ffdhe3072" 00:10:07.373 } 00:10:07.373 } 00:10:07.373 ]' 00:10:07.374 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:07.374 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:10:07.374 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:07.374 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:07.374 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:07.374 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:07.374 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:07.374 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:07.633 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:10:07.633 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:10:08.592 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:08.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:08.592 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:08.592 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.592 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.592 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.592 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:08.592 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:08.592 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:08.592 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:08.592 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:10:08.592 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:08.592 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:08.592 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:08.593 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:08.593 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:08.593 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:08.593 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.593 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.593 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.593 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:08.593 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:08.593 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:09.184 00:10:09.184 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:09.184 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:09.184 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:09.444 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:09.444 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:09.444 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.444 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.444 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.444 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:09.444 { 00:10:09.444 "cntlid": 25, 00:10:09.444 "qid": 0, 00:10:09.444 "state": "enabled", 00:10:09.444 "thread": "nvmf_tgt_poll_group_000", 00:10:09.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:09.444 "listen_address": { 00:10:09.444 "trtype": "TCP", 00:10:09.444 "adrfam": "IPv4", 00:10:09.444 "traddr": "10.0.0.3", 00:10:09.444 "trsvcid": "4420" 00:10:09.444 }, 00:10:09.444 "peer_address": { 00:10:09.444 "trtype": "TCP", 00:10:09.444 "adrfam": "IPv4", 00:10:09.444 "traddr": "10.0.0.1", 00:10:09.444 "trsvcid": "51144" 00:10:09.444 }, 00:10:09.444 "auth": { 00:10:09.444 "state": "completed", 00:10:09.444 "digest": "sha256", 00:10:09.444 "dhgroup": "ffdhe4096" 00:10:09.444 } 00:10:09.444 } 00:10:09.444 ]' 00:10:09.444 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:10:09.444 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:09.444 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:09.444 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:09.444 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:09.444 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:09.444 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:09.444 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:10.011 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:10:10.011 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:10:10.580 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:10.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:10.580 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:10.580 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.580 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.580 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.580 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:10.580 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:10.580 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:10.839 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:10:10.839 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:10.839 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:10.839 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:10.839 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:10.839 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:10.839 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:10.839 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.839 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.839 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.839 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:10.839 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:10.839 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:11.407 00:10:11.407 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:11.407 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:11.407 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:11.666 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:11.666 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:11.666 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.666 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.666 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.666 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:11.666 { 00:10:11.666 "cntlid": 27, 00:10:11.666 "qid": 0, 00:10:11.666 "state": "enabled", 00:10:11.666 "thread": "nvmf_tgt_poll_group_000", 00:10:11.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:11.666 "listen_address": { 00:10:11.666 "trtype": "TCP", 00:10:11.666 "adrfam": "IPv4", 00:10:11.666 "traddr": "10.0.0.3", 00:10:11.666 "trsvcid": "4420" 00:10:11.666 }, 00:10:11.666 "peer_address": { 00:10:11.666 "trtype": "TCP", 00:10:11.666 "adrfam": "IPv4", 00:10:11.666 "traddr": "10.0.0.1", 00:10:11.666 "trsvcid": "51168" 00:10:11.666 }, 00:10:11.666 "auth": { 00:10:11.666 "state": "completed", 
00:10:11.666 "digest": "sha256", 00:10:11.666 "dhgroup": "ffdhe4096" 00:10:11.666 } 00:10:11.666 } 00:10:11.666 ]' 00:10:11.666 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:11.925 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:11.925 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:11.925 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:11.925 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:11.925 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:11.925 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:11.925 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:12.185 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:10:12.185 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:10:12.753 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:12.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:12.753 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:12.753 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.753 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.753 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.753 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:12.753 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:13.013 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:13.272 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:10:13.272 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:13.272 10:28:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:13.272 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:13.272 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:13.272 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:13.272 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.272 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.272 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.272 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.272 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.272 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.272 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.530 00:10:13.530 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:13.530 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:13.530 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:13.789 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:13.789 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:13.789 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.789 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.789 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.789 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:13.789 { 00:10:13.789 "cntlid": 29, 00:10:13.789 "qid": 0, 00:10:13.789 "state": "enabled", 00:10:13.789 "thread": "nvmf_tgt_poll_group_000", 00:10:13.789 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:13.789 "listen_address": { 00:10:13.789 "trtype": "TCP", 00:10:13.789 "adrfam": "IPv4", 00:10:13.789 "traddr": "10.0.0.3", 00:10:13.789 "trsvcid": "4420" 00:10:13.789 }, 00:10:13.789 "peer_address": { 00:10:13.789 "trtype": "TCP", 00:10:13.789 "adrfam": 
"IPv4", 00:10:13.789 "traddr": "10.0.0.1", 00:10:13.789 "trsvcid": "51194" 00:10:13.789 }, 00:10:13.789 "auth": { 00:10:13.789 "state": "completed", 00:10:13.789 "digest": "sha256", 00:10:13.789 "dhgroup": "ffdhe4096" 00:10:13.789 } 00:10:13.789 } 00:10:13.789 ]' 00:10:13.789 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:14.051 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:14.051 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:14.051 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:14.051 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:14.051 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:14.051 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:14.051 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:14.309 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:10:14.309 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:10:15.244 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:15.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:15.244 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:15.244 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.244 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.244 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.244 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:15.244 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:15.244 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:15.244 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:10:15.244 10:28:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:15.244 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:15.244 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:15.244 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:15.244 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:15.244 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key3 00:10:15.244 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.244 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.244 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.244 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:15.244 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:15.244 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:15.811 00:10:15.811 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:15.811 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:15.811 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:16.070 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:16.070 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:16.070 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.070 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.070 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.070 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:16.070 { 00:10:16.070 "cntlid": 31, 00:10:16.070 "qid": 0, 00:10:16.070 "state": "enabled", 00:10:16.070 "thread": "nvmf_tgt_poll_group_000", 00:10:16.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:16.070 "listen_address": { 00:10:16.070 "trtype": "TCP", 00:10:16.070 "adrfam": "IPv4", 00:10:16.070 "traddr": "10.0.0.3", 00:10:16.070 "trsvcid": "4420" 00:10:16.070 }, 00:10:16.070 "peer_address": { 00:10:16.070 "trtype": "TCP", 
00:10:16.070 "adrfam": "IPv4", 00:10:16.070 "traddr": "10.0.0.1", 00:10:16.070 "trsvcid": "51220" 00:10:16.070 }, 00:10:16.070 "auth": { 00:10:16.070 "state": "completed", 00:10:16.070 "digest": "sha256", 00:10:16.070 "dhgroup": "ffdhe4096" 00:10:16.070 } 00:10:16.070 } 00:10:16.071 ]' 00:10:16.071 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:16.071 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:16.071 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:16.071 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:16.071 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:16.071 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:16.071 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:16.071 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:16.330 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:10:16.330 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:10:17.267 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:17.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:17.267 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:17.267 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.267 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.267 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.267 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:17.267 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:17.267 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:17.267 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:17.527 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:10:17.527 
10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:17.527 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:17.527 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:17.527 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:17.527 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:17.527 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.527 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.527 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.527 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.527 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.527 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.527 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.786 00:10:17.786 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:17.786 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:17.786 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:18.354 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:18.354 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:18.354 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.354 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.354 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.354 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:18.354 { 00:10:18.354 "cntlid": 33, 00:10:18.354 "qid": 0, 00:10:18.354 "state": "enabled", 00:10:18.354 "thread": "nvmf_tgt_poll_group_000", 00:10:18.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:18.354 "listen_address": { 00:10:18.354 "trtype": "TCP", 00:10:18.354 "adrfam": "IPv4", 00:10:18.354 "traddr": 
"10.0.0.3", 00:10:18.354 "trsvcid": "4420" 00:10:18.354 }, 00:10:18.354 "peer_address": { 00:10:18.354 "trtype": "TCP", 00:10:18.354 "adrfam": "IPv4", 00:10:18.354 "traddr": "10.0.0.1", 00:10:18.354 "trsvcid": "38356" 00:10:18.354 }, 00:10:18.354 "auth": { 00:10:18.354 "state": "completed", 00:10:18.354 "digest": "sha256", 00:10:18.354 "dhgroup": "ffdhe6144" 00:10:18.354 } 00:10:18.354 } 00:10:18.354 ]' 00:10:18.354 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:18.354 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:18.354 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:18.354 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:18.354 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:18.354 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:18.354 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:18.354 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:18.613 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:10:18.613 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:10:19.181 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:19.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:19.181 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:19.181 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.181 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.181 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.181 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:19.181 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:19.181 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:19.750 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:10:19.750 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:19.750 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:19.750 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:19.750 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:19.750 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:19.750 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:19.750 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.750 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.750 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.750 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:19.750 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:19.750 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:20.009 00:10:20.009 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:20.009 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:20.009 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:20.575 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:20.575 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:20.575 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.575 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.575 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.575 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:20.575 { 00:10:20.575 "cntlid": 35, 00:10:20.575 "qid": 0, 00:10:20.575 "state": "enabled", 00:10:20.575 "thread": "nvmf_tgt_poll_group_000", 
00:10:20.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:20.575 "listen_address": { 00:10:20.575 "trtype": "TCP", 00:10:20.575 "adrfam": "IPv4", 00:10:20.575 "traddr": "10.0.0.3", 00:10:20.575 "trsvcid": "4420" 00:10:20.575 }, 00:10:20.575 "peer_address": { 00:10:20.575 "trtype": "TCP", 00:10:20.575 "adrfam": "IPv4", 00:10:20.575 "traddr": "10.0.0.1", 00:10:20.575 "trsvcid": "38374" 00:10:20.575 }, 00:10:20.575 "auth": { 00:10:20.575 "state": "completed", 00:10:20.575 "digest": "sha256", 00:10:20.575 "dhgroup": "ffdhe6144" 00:10:20.575 } 00:10:20.575 } 00:10:20.575 ]' 00:10:20.575 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:20.575 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:20.575 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:20.575 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:20.575 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:20.575 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:20.575 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:20.575 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:20.833 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:10:20.833 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:10:21.400 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:21.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:21.400 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:21.400 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.400 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.400 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.400 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:21.400 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:21.400 10:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:21.967 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:10:21.967 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:21.967 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:21.967 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:21.967 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:21.967 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:21.967 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.967 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.967 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.967 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.967 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.967 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.967 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:22.226 00:10:22.484 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:22.484 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:22.484 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:22.742 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:22.742 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:22.742 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.742 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.742 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.742 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:22.742 { 
00:10:22.742 "cntlid": 37, 00:10:22.742 "qid": 0, 00:10:22.742 "state": "enabled", 00:10:22.742 "thread": "nvmf_tgt_poll_group_000", 00:10:22.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:22.742 "listen_address": { 00:10:22.742 "trtype": "TCP", 00:10:22.742 "adrfam": "IPv4", 00:10:22.742 "traddr": "10.0.0.3", 00:10:22.742 "trsvcid": "4420" 00:10:22.742 }, 00:10:22.742 "peer_address": { 00:10:22.742 "trtype": "TCP", 00:10:22.742 "adrfam": "IPv4", 00:10:22.743 "traddr": "10.0.0.1", 00:10:22.743 "trsvcid": "38394" 00:10:22.743 }, 00:10:22.743 "auth": { 00:10:22.743 "state": "completed", 00:10:22.743 "digest": "sha256", 00:10:22.743 "dhgroup": "ffdhe6144" 00:10:22.743 } 00:10:22.743 } 00:10:22.743 ]' 00:10:22.743 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:22.743 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:22.743 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:22.743 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:22.743 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:22.743 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:22.743 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:22.743 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:23.001 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:10:23.001 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:10:23.935 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:23.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:23.935 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:23.935 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.935 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.935 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.935 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:23.935 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:23.935 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:24.193 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:10:24.193 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:24.193 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:24.194 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:24.194 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:24.194 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:24.194 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key3 00:10:24.194 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.194 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.194 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.194 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:24.194 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:24.194 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:24.761 00:10:24.761 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:24.761 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:24.761 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:25.021 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:25.021 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:25.021 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.021 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.021 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.021 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:10:25.021 { 00:10:25.021 "cntlid": 39, 00:10:25.021 "qid": 0, 00:10:25.021 "state": "enabled", 00:10:25.021 "thread": "nvmf_tgt_poll_group_000", 00:10:25.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:25.021 "listen_address": { 00:10:25.021 "trtype": "TCP", 00:10:25.021 "adrfam": "IPv4", 00:10:25.021 "traddr": "10.0.0.3", 00:10:25.021 "trsvcid": "4420" 00:10:25.021 }, 00:10:25.021 "peer_address": { 00:10:25.021 "trtype": "TCP", 00:10:25.021 "adrfam": "IPv4", 00:10:25.021 "traddr": "10.0.0.1", 00:10:25.021 "trsvcid": "38416" 00:10:25.021 }, 00:10:25.021 "auth": { 00:10:25.021 "state": "completed", 00:10:25.021 "digest": "sha256", 00:10:25.021 "dhgroup": "ffdhe6144" 00:10:25.021 } 00:10:25.021 } 00:10:25.021 ]' 00:10:25.021 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:25.021 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:25.021 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:25.021 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:25.021 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:25.021 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:25.021 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:25.021 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:25.587 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:10:25.587 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:10:26.155 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:26.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:26.155 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:26.155 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.155 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.155 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.155 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:26.155 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:26.155 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:26.155 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:26.413 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:10:26.413 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:26.413 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:26.413 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:26.413 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:26.413 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:26.413 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:26.413 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.413 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.414 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.414 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:26.414 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:26.414 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:26.979 00:10:26.979 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:26.979 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:26.979 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:27.545 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:27.545 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:27.545 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.545 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.545 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:10:27.545 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:27.545 { 00:10:27.545 "cntlid": 41, 00:10:27.545 "qid": 0, 00:10:27.545 "state": "enabled", 00:10:27.545 "thread": "nvmf_tgt_poll_group_000", 00:10:27.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:27.545 "listen_address": { 00:10:27.545 "trtype": "TCP", 00:10:27.545 "adrfam": "IPv4", 00:10:27.545 "traddr": "10.0.0.3", 00:10:27.545 "trsvcid": "4420" 00:10:27.545 }, 00:10:27.545 "peer_address": { 00:10:27.545 "trtype": "TCP", 00:10:27.545 "adrfam": "IPv4", 00:10:27.545 "traddr": "10.0.0.1", 00:10:27.545 "trsvcid": "38438" 00:10:27.545 }, 00:10:27.545 "auth": { 00:10:27.545 "state": "completed", 00:10:27.545 "digest": "sha256", 00:10:27.545 "dhgroup": "ffdhe8192" 00:10:27.545 } 00:10:27.545 } 00:10:27.545 ]' 00:10:27.545 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:27.545 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:27.545 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:27.545 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:27.545 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:27.545 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:27.545 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:27.545 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:27.803 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:10:27.803 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:10:28.737 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:28.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:28.737 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:28.737 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.737 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.737 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
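The pass that finishes here (sha256 / ffdhe8192 / key0) boils down to a short host-side RPC sequence that the trace repeats for every key index. A minimal sketch of that sequence, reusing only flags and addresses visible in the log; key0 and ckey0 name keys the test registered earlier in the run, outside this excerpt:

# SPDK host-side initiator, driven over the host RPC socket used in the trace.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"

# Pin the initiator to one digest and one DH group so the negotiated values are predictable.
$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Attach with both a host key and a controller key, i.e. bidirectional DH-HMAC-CHAP.
$RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Tear the controller down again before the next key/dhgroup combination.
$RPC bdev_nvme_detach_controller nvme0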
00:10:28.737 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:28.737 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:28.737 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:28.996 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:10:28.996 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:28.996 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:28.996 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:28.996 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:28.996 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:28.996 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:28.996 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.996 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.996 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.996 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:28.996 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:28.996 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:29.563 00:10:29.563 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:29.563 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:29.563 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:29.821 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:29.821 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:29.821 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.821 10:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.821 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.821 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:29.821 { 00:10:29.821 "cntlid": 43, 00:10:29.821 "qid": 0, 00:10:29.821 "state": "enabled", 00:10:29.821 "thread": "nvmf_tgt_poll_group_000", 00:10:29.821 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:29.821 "listen_address": { 00:10:29.821 "trtype": "TCP", 00:10:29.821 "adrfam": "IPv4", 00:10:29.821 "traddr": "10.0.0.3", 00:10:29.821 "trsvcid": "4420" 00:10:29.821 }, 00:10:29.821 "peer_address": { 00:10:29.821 "trtype": "TCP", 00:10:29.821 "adrfam": "IPv4", 00:10:29.821 "traddr": "10.0.0.1", 00:10:29.821 "trsvcid": "56520" 00:10:29.821 }, 00:10:29.821 "auth": { 00:10:29.821 "state": "completed", 00:10:29.821 "digest": "sha256", 00:10:29.821 "dhgroup": "ffdhe8192" 00:10:29.821 } 00:10:29.821 } 00:10:29.821 ]' 00:10:29.821 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:29.821 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:29.821 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:30.078 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:30.078 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:30.078 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:30.078 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:30.078 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:30.336 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:10:30.336 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:10:30.901 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:30.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:30.901 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:30.901 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.901 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
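Each pass is only counted as successful after the target-side qpair is inspected: the nvmf_subsystem_get_qpairs output echoed above carries an "auth" object with the negotiated digest, DH group, and state. A sketch of that verification, assuming rpc.py is pointed at the target's RPC socket (the trace wraps this call in its rpc_cmd helper):

# Ask the target for the qpairs of the subsystem the host just connected to.
qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

# The auth object must report the parameters that were forced via bdev_nvme_set_options.
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]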
00:10:30.901 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.901 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:30.901 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:30.901 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:31.159 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:10:31.159 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:31.159 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:31.159 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:31.159 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:31.159 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:31.159 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:31.159 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.159 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.159 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.159 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:31.159 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:31.159 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:32.096 00:10:32.096 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:32.096 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:32.096 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.354 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:32.354 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:32.354 10:28:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.354 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.354 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.354 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:32.354 { 00:10:32.354 "cntlid": 45, 00:10:32.354 "qid": 0, 00:10:32.354 "state": "enabled", 00:10:32.354 "thread": "nvmf_tgt_poll_group_000", 00:10:32.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:32.354 "listen_address": { 00:10:32.354 "trtype": "TCP", 00:10:32.354 "adrfam": "IPv4", 00:10:32.354 "traddr": "10.0.0.3", 00:10:32.354 "trsvcid": "4420" 00:10:32.354 }, 00:10:32.354 "peer_address": { 00:10:32.354 "trtype": "TCP", 00:10:32.354 "adrfam": "IPv4", 00:10:32.354 "traddr": "10.0.0.1", 00:10:32.354 "trsvcid": "56536" 00:10:32.354 }, 00:10:32.354 "auth": { 00:10:32.354 "state": "completed", 00:10:32.354 "digest": "sha256", 00:10:32.354 "dhgroup": "ffdhe8192" 00:10:32.354 } 00:10:32.354 } 00:10:32.354 ]' 00:10:32.354 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:32.354 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:32.354 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:32.354 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:32.354 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:32.354 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:32.354 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:32.354 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:32.922 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:10:32.922 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:10:33.490 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:33.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:33.490 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:33.490 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
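Besides the SPDK initiator, every pass also reconnects with the Linux kernel host through nvme-cli, handing it the plaintext DHHC-1 secrets directly on the command line. A sketch of that step with the secrets elided; <host-secret> and <ctrl-secret> stand in for the DHHC-1:xx:... strings printed in the trace:

# Kernel NVMe/TCP host: authenticate against the same subsystem with nvme-cli.
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
    -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 \
    --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 \
    --dhchap-secret '<host-secret>' \
    --dhchap-ctrl-secret '<ctrl-secret>'

# Drop the kernel connection again before the host is removed from the subsystem.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0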
00:10:33.490 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.490 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.490 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:33.490 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:33.490 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:33.749 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:10:33.749 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:33.749 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:33.749 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:33.749 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:33.749 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.749 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key3 00:10:33.749 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.749 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.749 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.749 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:33.749 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:33.749 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:34.682 00:10:34.682 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:34.682 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:34.682 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.941 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:34.941 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:34.941 
10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.941 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.941 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.941 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:34.941 { 00:10:34.941 "cntlid": 47, 00:10:34.941 "qid": 0, 00:10:34.941 "state": "enabled", 00:10:34.941 "thread": "nvmf_tgt_poll_group_000", 00:10:34.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:34.941 "listen_address": { 00:10:34.941 "trtype": "TCP", 00:10:34.941 "adrfam": "IPv4", 00:10:34.941 "traddr": "10.0.0.3", 00:10:34.941 "trsvcid": "4420" 00:10:34.941 }, 00:10:34.941 "peer_address": { 00:10:34.941 "trtype": "TCP", 00:10:34.941 "adrfam": "IPv4", 00:10:34.941 "traddr": "10.0.0.1", 00:10:34.941 "trsvcid": "56558" 00:10:34.941 }, 00:10:34.941 "auth": { 00:10:34.941 "state": "completed", 00:10:34.941 "digest": "sha256", 00:10:34.941 "dhgroup": "ffdhe8192" 00:10:34.941 } 00:10:34.941 } 00:10:34.941 ]' 00:10:34.941 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:34.941 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:34.941 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:34.941 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:34.941 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:34.941 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:34.941 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:34.941 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:35.508 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:10:35.508 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:10:36.073 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:36.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:36.073 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:36.073 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.073 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
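This closes the sha256 sweep; from the next iteration on, the same cycle runs with sha384 and the "null" DH group. On the target side every pass is bracketed by an add_host/remove_host pair, and key3 is the one entry registered without a controller key, so that pass only authenticates the host. A sketch of the bracket, using the target-side rpc.py that the trace's rpc_cmd helper wraps:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33

# Admit the host on the subsystem and bind its DH-HMAC-CHAP key(s).
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
    --dhchap-key key3   # no --dhchap-ctrlr-key: the controller is not authenticated back

# ... attach, verify the qpair, detach (see the sketches earlier in this log) ...

# Revoke the host so the next digest/dhgroup/key combination starts from a clean subsystem.
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"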
00:10:36.073 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.073 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:36.073 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:36.073 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:36.073 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:36.073 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:36.331 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:10:36.331 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:36.331 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:36.331 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:36.331 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:36.331 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:36.331 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.331 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.331 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.331 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.331 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.331 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.331 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.588 00:10:36.588 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:36.588 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:36.588 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:37.154 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:37.154 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:37.154 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.154 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.154 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.154 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:37.154 { 00:10:37.154 "cntlid": 49, 00:10:37.154 "qid": 0, 00:10:37.154 "state": "enabled", 00:10:37.154 "thread": "nvmf_tgt_poll_group_000", 00:10:37.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:37.154 "listen_address": { 00:10:37.154 "trtype": "TCP", 00:10:37.154 "adrfam": "IPv4", 00:10:37.154 "traddr": "10.0.0.3", 00:10:37.154 "trsvcid": "4420" 00:10:37.154 }, 00:10:37.154 "peer_address": { 00:10:37.154 "trtype": "TCP", 00:10:37.154 "adrfam": "IPv4", 00:10:37.154 "traddr": "10.0.0.1", 00:10:37.155 "trsvcid": "56586" 00:10:37.155 }, 00:10:37.155 "auth": { 00:10:37.155 "state": "completed", 00:10:37.155 "digest": "sha384", 00:10:37.155 "dhgroup": "null" 00:10:37.155 } 00:10:37.155 } 00:10:37.155 ]' 00:10:37.155 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:37.155 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:37.155 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:37.155 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:37.155 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:37.155 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:37.155 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:37.155 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:37.413 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:10:37.413 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:10:37.981 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.240 10:28:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:38.240 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.240 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.240 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.240 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:38.240 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:38.240 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:38.498 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:10:38.498 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:38.498 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:38.498 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:38.498 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:38.498 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.498 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.498 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.498 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.498 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.498 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.498 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.498 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.757 00:10:38.757 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:38.757 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
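The bdev_nvme_get_controllers call issued here feeds the name check that the trace continues with just below: an attach with DH-HMAC-CHAP only counts if the controller actually shows up under the requested name. A sketch of that check against the host RPC socket:

# The attach is considered successful only if the controller registered as nvme0.
name=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]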
00:10:38.757 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:39.016 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:39.016 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.016 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.016 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.016 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.016 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:39.016 { 00:10:39.016 "cntlid": 51, 00:10:39.016 "qid": 0, 00:10:39.016 "state": "enabled", 00:10:39.016 "thread": "nvmf_tgt_poll_group_000", 00:10:39.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:39.016 "listen_address": { 00:10:39.016 "trtype": "TCP", 00:10:39.016 "adrfam": "IPv4", 00:10:39.016 "traddr": "10.0.0.3", 00:10:39.016 "trsvcid": "4420" 00:10:39.016 }, 00:10:39.016 "peer_address": { 00:10:39.016 "trtype": "TCP", 00:10:39.016 "adrfam": "IPv4", 00:10:39.016 "traddr": "10.0.0.1", 00:10:39.016 "trsvcid": "54544" 00:10:39.016 }, 00:10:39.016 "auth": { 00:10:39.016 "state": "completed", 00:10:39.016 "digest": "sha384", 00:10:39.016 "dhgroup": "null" 00:10:39.016 } 00:10:39.016 } 00:10:39.016 ]' 00:10:39.016 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:39.276 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:39.276 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:39.276 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:39.276 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:39.276 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:39.276 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:39.276 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:39.534 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:10:39.534 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:10:40.143 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:40.143 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:40.143 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:40.143 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.143 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.143 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.143 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:40.143 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:40.144 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:40.405 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:10:40.406 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:40.406 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:40.406 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:40.406 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:40.406 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:40.406 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.406 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.406 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.665 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.665 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.665 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.665 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.924 00:10:40.924 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:40.924 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:10:40.924 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:41.183 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:41.183 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:41.183 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.183 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.183 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.183 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:41.183 { 00:10:41.183 "cntlid": 53, 00:10:41.183 "qid": 0, 00:10:41.183 "state": "enabled", 00:10:41.183 "thread": "nvmf_tgt_poll_group_000", 00:10:41.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:41.183 "listen_address": { 00:10:41.183 "trtype": "TCP", 00:10:41.183 "adrfam": "IPv4", 00:10:41.183 "traddr": "10.0.0.3", 00:10:41.183 "trsvcid": "4420" 00:10:41.183 }, 00:10:41.183 "peer_address": { 00:10:41.183 "trtype": "TCP", 00:10:41.183 "adrfam": "IPv4", 00:10:41.183 "traddr": "10.0.0.1", 00:10:41.183 "trsvcid": "54562" 00:10:41.183 }, 00:10:41.183 "auth": { 00:10:41.183 "state": "completed", 00:10:41.183 "digest": "sha384", 00:10:41.183 "dhgroup": "null" 00:10:41.183 } 00:10:41.183 } 00:10:41.183 ]' 00:10:41.184 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:41.184 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:41.184 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:41.184 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:41.184 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:41.443 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:41.443 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:41.443 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:41.701 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:10:41.701 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:10:42.269 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:42.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:42.269 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:42.269 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.269 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.528 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.528 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:42.528 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:42.528 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:42.786 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:10:42.786 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:42.786 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:42.786 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:42.786 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:42.786 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.786 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key3 00:10:42.786 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.786 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.786 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.786 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:42.786 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:42.787 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:43.045 00:10:43.045 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:43.045 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:43.045 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:43.305 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:43.305 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:43.305 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.305 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.305 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.305 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:43.305 { 00:10:43.305 "cntlid": 55, 00:10:43.305 "qid": 0, 00:10:43.305 "state": "enabled", 00:10:43.305 "thread": "nvmf_tgt_poll_group_000", 00:10:43.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:43.305 "listen_address": { 00:10:43.305 "trtype": "TCP", 00:10:43.305 "adrfam": "IPv4", 00:10:43.305 "traddr": "10.0.0.3", 00:10:43.305 "trsvcid": "4420" 00:10:43.305 }, 00:10:43.305 "peer_address": { 00:10:43.305 "trtype": "TCP", 00:10:43.305 "adrfam": "IPv4", 00:10:43.305 "traddr": "10.0.0.1", 00:10:43.305 "trsvcid": "54590" 00:10:43.305 }, 00:10:43.305 "auth": { 00:10:43.305 "state": "completed", 00:10:43.305 "digest": "sha384", 00:10:43.305 "dhgroup": "null" 00:10:43.305 } 00:10:43.305 } 00:10:43.305 ]' 00:10:43.305 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:43.305 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:43.305 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:43.305 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:43.305 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:43.564 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:43.564 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:43.564 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.822 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:10:43.822 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:10:44.395 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:44.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:10:44.395 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:44.395 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.395 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.395 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.395 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:44.395 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:44.395 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:44.395 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:44.653 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:10:44.654 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:44.654 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:44.654 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:44.654 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:44.654 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.654 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.654 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.654 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.654 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.654 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.654 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.654 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.221 00:10:45.221 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:45.221 
10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:45.221 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:45.221 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.221 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.221 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.221 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.492 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.492 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:45.492 { 00:10:45.492 "cntlid": 57, 00:10:45.492 "qid": 0, 00:10:45.492 "state": "enabled", 00:10:45.492 "thread": "nvmf_tgt_poll_group_000", 00:10:45.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:45.492 "listen_address": { 00:10:45.492 "trtype": "TCP", 00:10:45.492 "adrfam": "IPv4", 00:10:45.492 "traddr": "10.0.0.3", 00:10:45.492 "trsvcid": "4420" 00:10:45.492 }, 00:10:45.492 "peer_address": { 00:10:45.492 "trtype": "TCP", 00:10:45.492 "adrfam": "IPv4", 00:10:45.492 "traddr": "10.0.0.1", 00:10:45.492 "trsvcid": "54600" 00:10:45.492 }, 00:10:45.492 "auth": { 00:10:45.492 "state": "completed", 00:10:45.492 "digest": "sha384", 00:10:45.492 "dhgroup": "ffdhe2048" 00:10:45.492 } 00:10:45.492 } 00:10:45.492 ]' 00:10:45.492 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:45.492 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:45.492 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:45.492 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:45.492 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:45.492 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.492 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.492 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.788 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:10:45.788 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: 
--dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:10:46.743 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:46.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:46.743 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:46.743 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.743 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.743 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.743 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:46.743 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:46.743 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:46.743 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:10:46.743 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:46.743 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:46.743 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:46.743 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:46.743 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:46.743 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.743 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.743 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.743 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.743 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.743 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.743 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.310 00:10:47.310 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:47.310 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:47.310 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:47.571 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:47.571 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:47.571 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.571 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.571 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.571 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:47.571 { 00:10:47.571 "cntlid": 59, 00:10:47.571 "qid": 0, 00:10:47.571 "state": "enabled", 00:10:47.571 "thread": "nvmf_tgt_poll_group_000", 00:10:47.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:47.571 "listen_address": { 00:10:47.571 "trtype": "TCP", 00:10:47.571 "adrfam": "IPv4", 00:10:47.571 "traddr": "10.0.0.3", 00:10:47.571 "trsvcid": "4420" 00:10:47.571 }, 00:10:47.571 "peer_address": { 00:10:47.571 "trtype": "TCP", 00:10:47.571 "adrfam": "IPv4", 00:10:47.571 "traddr": "10.0.0.1", 00:10:47.571 "trsvcid": "54636" 00:10:47.571 }, 00:10:47.571 "auth": { 00:10:47.571 "state": "completed", 00:10:47.571 "digest": "sha384", 00:10:47.571 "dhgroup": "ffdhe2048" 00:10:47.571 } 00:10:47.571 } 00:10:47.571 ]' 00:10:47.571 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:47.571 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:47.571 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:47.571 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:47.571 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:47.571 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:47.571 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:47.571 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.832 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:10:47.832 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:10:48.768 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.768 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:48.768 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.768 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.768 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.768 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:48.768 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:48.768 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:48.768 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:10:48.768 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:48.768 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:48.768 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:48.768 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:48.768 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.768 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.768 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.768 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.768 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.768 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.768 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.768 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.335 00:10:49.335 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:49.335 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:49.335 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:49.595 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.595 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.595 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.595 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.595 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.595 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:49.595 { 00:10:49.595 "cntlid": 61, 00:10:49.595 "qid": 0, 00:10:49.595 "state": "enabled", 00:10:49.595 "thread": "nvmf_tgt_poll_group_000", 00:10:49.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:49.595 "listen_address": { 00:10:49.595 "trtype": "TCP", 00:10:49.595 "adrfam": "IPv4", 00:10:49.595 "traddr": "10.0.0.3", 00:10:49.595 "trsvcid": "4420" 00:10:49.595 }, 00:10:49.595 "peer_address": { 00:10:49.595 "trtype": "TCP", 00:10:49.595 "adrfam": "IPv4", 00:10:49.595 "traddr": "10.0.0.1", 00:10:49.595 "trsvcid": "47246" 00:10:49.595 }, 00:10:49.595 "auth": { 00:10:49.595 "state": "completed", 00:10:49.595 "digest": "sha384", 00:10:49.595 "dhgroup": "ffdhe2048" 00:10:49.595 } 00:10:49.595 } 00:10:49.595 ]' 00:10:49.595 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:49.595 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:49.595 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:49.595 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:49.595 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:49.595 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.595 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.595 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:50.163 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:10:50.163 10:28:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:10:50.731 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.731 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:50.731 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.731 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.731 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.731 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:50.731 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:50.731 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:51.002 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:10:51.002 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:51.002 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:51.002 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:51.002 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:51.002 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:51.002 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key3 00:10:51.002 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.002 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.002 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.002 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:51.002 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:51.002 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:51.589 00:10:51.589 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:51.589 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:51.589 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:51.589 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.589 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.589 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.589 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.589 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.589 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:51.589 { 00:10:51.589 "cntlid": 63, 00:10:51.589 "qid": 0, 00:10:51.589 "state": "enabled", 00:10:51.589 "thread": "nvmf_tgt_poll_group_000", 00:10:51.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:51.589 "listen_address": { 00:10:51.589 "trtype": "TCP", 00:10:51.589 "adrfam": "IPv4", 00:10:51.589 "traddr": "10.0.0.3", 00:10:51.589 "trsvcid": "4420" 00:10:51.589 }, 00:10:51.589 "peer_address": { 00:10:51.589 "trtype": "TCP", 00:10:51.589 "adrfam": "IPv4", 00:10:51.589 "traddr": "10.0.0.1", 00:10:51.589 "trsvcid": "47282" 00:10:51.589 }, 00:10:51.589 "auth": { 00:10:51.589 "state": "completed", 00:10:51.589 "digest": "sha384", 00:10:51.589 "dhgroup": "ffdhe2048" 00:10:51.589 } 00:10:51.589 } 00:10:51.589 ]' 00:10:51.589 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:51.848 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:51.848 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:51.848 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:51.848 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:51.848 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.848 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.848 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:52.108 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:10:52.108 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:10:52.676 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.676 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:52.676 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.676 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.935 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.935 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:52.935 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:52.935 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:52.935 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:53.194 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:10:53.194 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:53.194 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:53.194 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:53.194 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:53.194 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:53.194 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.194 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.194 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.194 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.194 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.194 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:10:53.194 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.453 00:10:53.453 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:53.453 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:53.453 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.711 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.711 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.711 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.711 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.711 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.711 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:53.711 { 00:10:53.711 "cntlid": 65, 00:10:53.711 "qid": 0, 00:10:53.711 "state": "enabled", 00:10:53.711 "thread": "nvmf_tgt_poll_group_000", 00:10:53.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:53.711 "listen_address": { 00:10:53.711 "trtype": "TCP", 00:10:53.711 "adrfam": "IPv4", 00:10:53.711 "traddr": "10.0.0.3", 00:10:53.711 "trsvcid": "4420" 00:10:53.711 }, 00:10:53.711 "peer_address": { 00:10:53.711 "trtype": "TCP", 00:10:53.711 "adrfam": "IPv4", 00:10:53.711 "traddr": "10.0.0.1", 00:10:53.711 "trsvcid": "47292" 00:10:53.711 }, 00:10:53.711 "auth": { 00:10:53.711 "state": "completed", 00:10:53.711 "digest": "sha384", 00:10:53.711 "dhgroup": "ffdhe3072" 00:10:53.711 } 00:10:53.711 } 00:10:53.711 ]' 00:10:53.711 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:53.711 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:53.711 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:53.972 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:53.972 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:53.972 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.972 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.972 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:54.240 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:10:54.240 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:10:54.808 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.808 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:54.808 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.808 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.808 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.808 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:54.808 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:54.808 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:55.067 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:10:55.067 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:55.067 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:55.067 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:55.067 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:55.067 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.067 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.067 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.067 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.067 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.067 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.067 10:28:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.067 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.325 00:10:55.325 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:55.325 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.325 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:55.897 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:55.897 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:55.897 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.897 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.897 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.897 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:55.897 { 00:10:55.897 "cntlid": 67, 00:10:55.897 "qid": 0, 00:10:55.897 "state": "enabled", 00:10:55.897 "thread": "nvmf_tgt_poll_group_000", 00:10:55.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:55.897 "listen_address": { 00:10:55.897 "trtype": "TCP", 00:10:55.897 "adrfam": "IPv4", 00:10:55.897 "traddr": "10.0.0.3", 00:10:55.897 "trsvcid": "4420" 00:10:55.897 }, 00:10:55.897 "peer_address": { 00:10:55.897 "trtype": "TCP", 00:10:55.897 "adrfam": "IPv4", 00:10:55.897 "traddr": "10.0.0.1", 00:10:55.897 "trsvcid": "47310" 00:10:55.897 }, 00:10:55.897 "auth": { 00:10:55.897 "state": "completed", 00:10:55.897 "digest": "sha384", 00:10:55.897 "dhgroup": "ffdhe3072" 00:10:55.897 } 00:10:55.897 } 00:10:55.897 ]' 00:10:55.897 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:55.897 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:55.897 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:55.897 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:55.897 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:55.897 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.897 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.897 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:56.156 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:10:56.156 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:10:56.724 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:56.724 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:56.724 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.724 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.724 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.724 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:56.724 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:56.724 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:56.984 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:10:56.984 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:56.984 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:56.984 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:56.984 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:56.984 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.984 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.984 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.984 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.984 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.984 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.984 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.984 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.552 00:10:57.552 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:57.552 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.552 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:57.811 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.811 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.811 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.811 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.811 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.811 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:57.811 { 00:10:57.811 "cntlid": 69, 00:10:57.811 "qid": 0, 00:10:57.811 "state": "enabled", 00:10:57.811 "thread": "nvmf_tgt_poll_group_000", 00:10:57.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:10:57.811 "listen_address": { 00:10:57.811 "trtype": "TCP", 00:10:57.811 "adrfam": "IPv4", 00:10:57.811 "traddr": "10.0.0.3", 00:10:57.811 "trsvcid": "4420" 00:10:57.811 }, 00:10:57.811 "peer_address": { 00:10:57.811 "trtype": "TCP", 00:10:57.811 "adrfam": "IPv4", 00:10:57.811 "traddr": "10.0.0.1", 00:10:57.811 "trsvcid": "47344" 00:10:57.811 }, 00:10:57.811 "auth": { 00:10:57.811 "state": "completed", 00:10:57.811 "digest": "sha384", 00:10:57.811 "dhgroup": "ffdhe3072" 00:10:57.811 } 00:10:57.811 } 00:10:57.811 ]' 00:10:57.811 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:57.811 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:57.811 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:58.070 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:58.070 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:58.070 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.070 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:10:58.070 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.328 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:10:58.329 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:10:59.268 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:59.268 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:10:59.268 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.268 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.268 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.268 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:59.268 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:59.268 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:59.528 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:10:59.528 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:59.528 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:59.528 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:59.528 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:59.528 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.528 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key3 00:10:59.528 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.528 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.528 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
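Each connect_authenticate pass in this log repeats the same three-step RPC sequence: pin the initiator to one digest/DH-group pair, register the host NQN on the target subsystem with the key slot under test, then attach a controller through the initiator so DH-HMAC-CHAP runs in-band. A condensed sketch of the sha384/ffdhe3072/key3 pass that starts here, using the NQNs, address and key names from the trace and assuming the target answers on rpc.py's default socket:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33

# initiator side: allow only the digest/dhgroup combination under test
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
# target side: bind key slot 3 to this host (no --dhchap-ctrlr-key; see the note below)
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3
# initiator side: attach a controller, authenticating with the same key slot
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key3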
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.528 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:59.528 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:59.528 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:59.785 00:10:59.785 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:59.785 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:59.785 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.043 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.043 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.043 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.043 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.043 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.043 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:00.043 { 00:11:00.043 "cntlid": 71, 00:11:00.043 "qid": 0, 00:11:00.043 "state": "enabled", 00:11:00.043 "thread": "nvmf_tgt_poll_group_000", 00:11:00.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:00.043 "listen_address": { 00:11:00.043 "trtype": "TCP", 00:11:00.043 "adrfam": "IPv4", 00:11:00.043 "traddr": "10.0.0.3", 00:11:00.043 "trsvcid": "4420" 00:11:00.043 }, 00:11:00.043 "peer_address": { 00:11:00.043 "trtype": "TCP", 00:11:00.043 "adrfam": "IPv4", 00:11:00.043 "traddr": "10.0.0.1", 00:11:00.043 "trsvcid": "48976" 00:11:00.043 }, 00:11:00.043 "auth": { 00:11:00.043 "state": "completed", 00:11:00.043 "digest": "sha384", 00:11:00.043 "dhgroup": "ffdhe3072" 00:11:00.043 } 00:11:00.043 } 00:11:00.043 ]' 00:11:00.043 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:00.302 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:00.302 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:00.302 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:00.302 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:00.302 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.302 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
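The reason the key3 passes above carry no --dhchap-ctrlr-key is the expansion at target/auth.sh@68: ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) yields an empty array whenever the matching controller key slot is unset or empty, so those passes exercise unidirectional (host-only) authentication. A small stand-alone illustration of the :+ behaviour, with the array contents assumed purely for the example:

ckeys=(ckey0 ckey1 ckey2 "")   # assumed layout: last slot deliberately has no controller key
for keyid in "${!ckeys[@]}"; do
    # expands to two words when a controller key exists, to zero words otherwise
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "key$keyid -> ${#ckey[@]} extra arg(s): ${ckey[*]}"
done
# key0 -> 2 extra arg(s): --dhchap-ctrlr-key ckey0
# key3 -> 0 extra arg(s):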
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.302 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:00.871 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:11:00.871 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:11:01.440 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.440 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:01.440 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.440 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.440 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.440 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:01.440 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:01.440 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:01.440 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:01.698 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:11:01.698 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:01.698 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:01.698 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:01.698 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:01.698 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.698 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.698 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.698 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.698 10:29:02 
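Alongside the SPDK-initiator path, each pass also drives the kernel initiator (target/auth.sh@36) with the same secrets through nvme-cli's in-band authentication options. A sketch of the key3 connect/disconnect just above, with the DHHC-1 secret abbreviated to a placeholder that must be replaced by a real key:

subnqn=nqn.2024-03.io.spdk:cnode0
hostid=b4733420-cf17-49bc-adb6-f89fe6fa7a33
hostnqn="nqn.2014-08.org.nvmexpress:uuid:$hostid"

# host secret only (key slot 3 has no controller secret); -i 1 keeps a single
# I/O queue and -l 0 sets ctrl-loss-tmo to zero, both as in the trace
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 \
    -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret 'DHHC-1:03:<host-secret-placeholder>:'
nvme disconnect -n "$subnqn"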
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.698 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.698 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.698 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.957 00:11:01.957 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:01.957 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:01.957 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.526 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.526 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.526 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.526 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.526 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.526 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:02.526 { 00:11:02.526 "cntlid": 73, 00:11:02.526 "qid": 0, 00:11:02.526 "state": "enabled", 00:11:02.526 "thread": "nvmf_tgt_poll_group_000", 00:11:02.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:02.526 "listen_address": { 00:11:02.526 "trtype": "TCP", 00:11:02.526 "adrfam": "IPv4", 00:11:02.526 "traddr": "10.0.0.3", 00:11:02.526 "trsvcid": "4420" 00:11:02.526 }, 00:11:02.526 "peer_address": { 00:11:02.526 "trtype": "TCP", 00:11:02.526 "adrfam": "IPv4", 00:11:02.526 "traddr": "10.0.0.1", 00:11:02.526 "trsvcid": "48992" 00:11:02.526 }, 00:11:02.526 "auth": { 00:11:02.526 "state": "completed", 00:11:02.526 "digest": "sha384", 00:11:02.526 "dhgroup": "ffdhe4096" 00:11:02.526 } 00:11:02.526 } 00:11:02.526 ]' 00:11:02.526 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:02.526 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:02.526 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:02.526 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:02.526 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:02.526 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.526 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.526 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.784 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:11:02.784 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:11:03.721 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.721 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:03.721 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.721 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.721 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.721 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:03.721 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:03.721 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:03.721 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:11:03.721 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:03.721 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:03.721 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:03.721 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:03.721 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.721 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.721 10:29:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.721 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.721 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.721 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.721 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.721 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:04.290 00:11:04.290 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:04.290 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.290 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:04.549 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.549 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.550 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.550 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.550 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.550 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:04.550 { 00:11:04.550 "cntlid": 75, 00:11:04.550 "qid": 0, 00:11:04.550 "state": "enabled", 00:11:04.550 "thread": "nvmf_tgt_poll_group_000", 00:11:04.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:04.550 "listen_address": { 00:11:04.550 "trtype": "TCP", 00:11:04.550 "adrfam": "IPv4", 00:11:04.550 "traddr": "10.0.0.3", 00:11:04.550 "trsvcid": "4420" 00:11:04.550 }, 00:11:04.550 "peer_address": { 00:11:04.550 "trtype": "TCP", 00:11:04.550 "adrfam": "IPv4", 00:11:04.550 "traddr": "10.0.0.1", 00:11:04.550 "trsvcid": "49024" 00:11:04.550 }, 00:11:04.550 "auth": { 00:11:04.550 "state": "completed", 00:11:04.550 "digest": "sha384", 00:11:04.550 "dhgroup": "ffdhe4096" 00:11:04.550 } 00:11:04.550 } 00:11:04.550 ]' 00:11:04.550 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:04.550 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:04.550 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:04.809 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
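Before any auth state is inspected, each pass first checks that the attach actually produced a controller on the initiator side (target/auth.sh@73). With the hostrpc wrapper sketched earlier, that check reduces to:

# list the initiator-side controllers and require the expected name
name=$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || { echo "controller nvme0 did not attach" >&2; exit 1; }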
== \f\f\d\h\e\4\0\9\6 ]] 00:11:04.809 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:04.809 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.809 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.809 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.067 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:11:05.067 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:11:05.635 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.635 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:05.635 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.635 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.894 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.894 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:05.894 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:05.894 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:06.152 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:11:06.152 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:06.152 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:06.152 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:06.152 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:06.152 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.152 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.152 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.152 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.152 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.152 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.153 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.153 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.411 00:11:06.411 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:06.411 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:06.411 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.670 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.670 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.670 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.670 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.670 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.670 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:06.670 { 00:11:06.670 "cntlid": 77, 00:11:06.670 "qid": 0, 00:11:06.670 "state": "enabled", 00:11:06.670 "thread": "nvmf_tgt_poll_group_000", 00:11:06.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:06.670 "listen_address": { 00:11:06.670 "trtype": "TCP", 00:11:06.670 "adrfam": "IPv4", 00:11:06.670 "traddr": "10.0.0.3", 00:11:06.670 "trsvcid": "4420" 00:11:06.670 }, 00:11:06.670 "peer_address": { 00:11:06.670 "trtype": "TCP", 00:11:06.670 "adrfam": "IPv4", 00:11:06.670 "traddr": "10.0.0.1", 00:11:06.670 "trsvcid": "49060" 00:11:06.670 }, 00:11:06.670 "auth": { 00:11:06.670 "state": "completed", 00:11:06.670 "digest": "sha384", 00:11:06.670 "dhgroup": "ffdhe4096" 00:11:06.670 } 00:11:06.670 } 00:11:06.670 ]' 00:11:06.670 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:06.670 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:06.670 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:11:06.930 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:06.930 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:06.930 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.930 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:06.930 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.188 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:11:07.188 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:11:08.125 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.125 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:08.125 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.125 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.125 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.125 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:08.125 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:08.125 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:08.125 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:11:08.125 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:08.125 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:08.125 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:08.125 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:08.125 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.125 10:29:08 
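Each iteration ends the same way: once the SPDK-initiator path has been verified it is detached (target/auth.sh@78), the same secrets are exercised once more through the kernel initiator (@80-82), and the host entry is removed from the subsystem (@83) so the next key/DH-group pair starts from a clean slate. Condensed, with variables and hostrpc as in the earlier sketches:

hostrpc bdev_nvme_detach_controller nvme0               # drop the SPDK initiator path
# ... nvme connect / disconnect with the same secrets, as sketched above ...
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"  # unregister the host for the next pass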
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key3 00:11:08.125 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.125 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.125 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.125 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:08.125 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:08.125 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:08.691 00:11:08.691 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:08.691 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:08.691 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.949 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.949 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.949 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.949 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.949 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.949 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:08.949 { 00:11:08.949 "cntlid": 79, 00:11:08.949 "qid": 0, 00:11:08.949 "state": "enabled", 00:11:08.949 "thread": "nvmf_tgt_poll_group_000", 00:11:08.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:08.949 "listen_address": { 00:11:08.949 "trtype": "TCP", 00:11:08.949 "adrfam": "IPv4", 00:11:08.949 "traddr": "10.0.0.3", 00:11:08.949 "trsvcid": "4420" 00:11:08.949 }, 00:11:08.949 "peer_address": { 00:11:08.949 "trtype": "TCP", 00:11:08.949 "adrfam": "IPv4", 00:11:08.949 "traddr": "10.0.0.1", 00:11:08.949 "trsvcid": "47638" 00:11:08.949 }, 00:11:08.949 "auth": { 00:11:08.949 "state": "completed", 00:11:08.949 "digest": "sha384", 00:11:08.949 "dhgroup": "ffdhe4096" 00:11:08.949 } 00:11:08.949 } 00:11:08.949 ]' 00:11:08.949 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:08.949 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:08.949 10:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:08.949 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:08.949 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:09.206 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.206 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.206 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.464 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:11:09.464 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:11:10.031 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.031 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:10.031 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.032 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.032 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.032 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:10.032 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:10.032 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:10.032 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:10.319 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:11:10.319 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:10.319 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:10.319 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:10.319 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:10.319 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
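The structure repeating through this stretch of the log is the nesting visible at target/auth.sh@119-123: an outer walk over DH groups, an inner walk over the key slots, and a fresh bdev_nvme_set_options before every connect_authenticate. Reconstructed from the xtrace; the array contents are assumptions, since only sha384 and the ffdhe3072/4096/6144/8192 groups appear in this excerpt:

digest=sha384
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # assumed from this excerpt
keys=(key0 key1 key2 key3)                           # the four key slots exercised above

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        # pin the initiator to exactly one digest/dhgroup combination per pass
        hostrpc bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
done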
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.319 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.319 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.319 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.319 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.319 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.319 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.319 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.886 00:11:10.886 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:10.886 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.886 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:11.144 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.144 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.144 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.144 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.144 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.144 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:11.144 { 00:11:11.144 "cntlid": 81, 00:11:11.144 "qid": 0, 00:11:11.144 "state": "enabled", 00:11:11.144 "thread": "nvmf_tgt_poll_group_000", 00:11:11.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:11.144 "listen_address": { 00:11:11.144 "trtype": "TCP", 00:11:11.144 "adrfam": "IPv4", 00:11:11.144 "traddr": "10.0.0.3", 00:11:11.144 "trsvcid": "4420" 00:11:11.144 }, 00:11:11.144 "peer_address": { 00:11:11.144 "trtype": "TCP", 00:11:11.144 "adrfam": "IPv4", 00:11:11.144 "traddr": "10.0.0.1", 00:11:11.144 "trsvcid": "47660" 00:11:11.144 }, 00:11:11.144 "auth": { 00:11:11.144 "state": "completed", 00:11:11.144 "digest": "sha384", 00:11:11.144 "dhgroup": "ffdhe6144" 00:11:11.144 } 00:11:11.144 } 00:11:11.144 ]' 00:11:11.144 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
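The authoritative confirmation that DH-HMAC-CHAP actually completed is taken on the target: nvmf_subsystem_get_qpairs reports, per qpair, the negotiated digest, the DH group and the auth state, and the three jq assertions at target/auth.sh@75-77 simply compare them against the pair under test. Sketched for the ffdhe6144 pass shown here, again assuming the target's default RPC socket:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]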
00:11:11.144 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:11.144 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:11.144 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:11.144 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:11.403 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.403 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.403 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.662 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:11:11.662 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:11:12.230 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.230 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:12.230 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.230 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.230 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.230 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:12.230 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:12.230 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:12.798 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:11:12.798 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:12.798 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:12.798 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:11:12.798 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:12.798 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.798 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.798 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.798 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.798 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.798 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.798 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.798 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.058 00:11:13.058 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:13.058 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:13.058 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.625 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.625 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.625 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.625 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.625 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.625 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:13.625 { 00:11:13.625 "cntlid": 83, 00:11:13.625 "qid": 0, 00:11:13.625 "state": "enabled", 00:11:13.625 "thread": "nvmf_tgt_poll_group_000", 00:11:13.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:13.625 "listen_address": { 00:11:13.625 "trtype": "TCP", 00:11:13.625 "adrfam": "IPv4", 00:11:13.625 "traddr": "10.0.0.3", 00:11:13.625 "trsvcid": "4420" 00:11:13.625 }, 00:11:13.625 "peer_address": { 00:11:13.625 "trtype": "TCP", 00:11:13.625 "adrfam": "IPv4", 00:11:13.625 "traddr": "10.0.0.1", 00:11:13.625 "trsvcid": "47694" 00:11:13.625 }, 00:11:13.625 "auth": { 00:11:13.625 "state": "completed", 00:11:13.625 "digest": "sha384", 
00:11:13.625 "dhgroup": "ffdhe6144" 00:11:13.625 } 00:11:13.625 } 00:11:13.625 ]' 00:11:13.625 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:13.625 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:13.625 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:13.625 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:13.625 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:13.625 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.625 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.625 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.884 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:11:13.884 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:11:14.819 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.819 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:14.819 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.819 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.819 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.819 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:14.819 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:14.819 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:15.078 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:11:15.078 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:15.078 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:11:15.078 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:15.078 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:15.078 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.078 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.078 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.078 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.078 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.078 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.078 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.078 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.645 00:11:15.645 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:15.645 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:15.645 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.904 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.904 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.904 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.904 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.904 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.904 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:15.904 { 00:11:15.904 "cntlid": 85, 00:11:15.904 "qid": 0, 00:11:15.904 "state": "enabled", 00:11:15.904 "thread": "nvmf_tgt_poll_group_000", 00:11:15.904 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:15.904 "listen_address": { 00:11:15.904 "trtype": "TCP", 00:11:15.904 "adrfam": "IPv4", 00:11:15.904 "traddr": "10.0.0.3", 00:11:15.904 "trsvcid": "4420" 00:11:15.904 }, 00:11:15.904 "peer_address": { 00:11:15.904 "trtype": "TCP", 00:11:15.904 "adrfam": "IPv4", 00:11:15.904 "traddr": "10.0.0.1", 00:11:15.904 "trsvcid": "47722" 
00:11:15.904 }, 00:11:15.904 "auth": { 00:11:15.904 "state": "completed", 00:11:15.904 "digest": "sha384", 00:11:15.904 "dhgroup": "ffdhe6144" 00:11:15.904 } 00:11:15.904 } 00:11:15.904 ]' 00:11:15.904 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:15.904 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:15.904 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:15.904 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:15.904 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:15.904 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.904 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.904 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.163 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:11:16.163 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:11:17.098 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.098 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:17.098 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.098 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.098 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.098 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:17.098 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:17.098 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:17.098 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:11:17.098 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:11:17.098 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:17.098 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:17.098 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:17.099 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.099 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key3 00:11:17.099 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.099 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.099 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.099 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:17.099 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:17.099 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:17.666 00:11:17.666 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:17.666 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:17.666 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.924 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.924 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.924 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.924 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.924 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.924 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:17.924 { 00:11:17.924 "cntlid": 87, 00:11:17.924 "qid": 0, 00:11:17.924 "state": "enabled", 00:11:17.924 "thread": "nvmf_tgt_poll_group_000", 00:11:17.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:17.924 "listen_address": { 00:11:17.924 "trtype": "TCP", 00:11:17.924 "adrfam": "IPv4", 00:11:17.924 "traddr": "10.0.0.3", 00:11:17.924 "trsvcid": "4420" 00:11:17.924 }, 00:11:17.924 "peer_address": { 00:11:17.924 "trtype": "TCP", 00:11:17.924 "adrfam": "IPv4", 00:11:17.924 "traddr": "10.0.0.1", 00:11:17.924 "trsvcid": 
"47750" 00:11:17.924 }, 00:11:17.924 "auth": { 00:11:17.924 "state": "completed", 00:11:17.924 "digest": "sha384", 00:11:17.924 "dhgroup": "ffdhe6144" 00:11:17.924 } 00:11:17.924 } 00:11:17.924 ]' 00:11:17.924 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:17.924 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:17.924 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:18.184 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:18.184 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:18.184 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.184 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.184 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.442 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:11:18.442 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:11:19.128 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.387 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:19.387 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.387 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.387 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.387 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:19.387 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:19.387 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:19.387 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:19.646 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:11:19.646 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:11:19.646 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:19.646 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:19.646 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:19.646 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:19.646 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.646 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.646 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.646 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.646 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.646 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.646 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.214 00:11:20.214 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:20.214 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:20.214 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:20.473 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.473 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.473 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.473 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.473 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.473 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:20.473 { 00:11:20.473 "cntlid": 89, 00:11:20.473 "qid": 0, 00:11:20.473 "state": "enabled", 00:11:20.473 "thread": "nvmf_tgt_poll_group_000", 00:11:20.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:20.473 "listen_address": { 00:11:20.473 "trtype": "TCP", 00:11:20.473 "adrfam": "IPv4", 00:11:20.473 "traddr": "10.0.0.3", 00:11:20.473 "trsvcid": "4420" 00:11:20.473 }, 00:11:20.473 "peer_address": { 00:11:20.473 
"trtype": "TCP", 00:11:20.473 "adrfam": "IPv4", 00:11:20.473 "traddr": "10.0.0.1", 00:11:20.473 "trsvcid": "39868" 00:11:20.473 }, 00:11:20.473 "auth": { 00:11:20.473 "state": "completed", 00:11:20.473 "digest": "sha384", 00:11:20.473 "dhgroup": "ffdhe8192" 00:11:20.473 } 00:11:20.473 } 00:11:20.473 ]' 00:11:20.473 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:20.732 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:20.732 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:20.732 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:20.732 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:20.732 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.732 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.732 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:20.991 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:11:20.991 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:11:21.559 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.559 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:21.559 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.559 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.559 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.559 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:21.559 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:21.560 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:22.127 10:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:11:22.127 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:22.127 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:22.127 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:22.127 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:22.127 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.127 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.127 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.127 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.127 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.127 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.127 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.128 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.695 00:11:22.695 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:22.695 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:22.695 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.953 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.953 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.953 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.953 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.953 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.953 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:22.953 { 00:11:22.953 "cntlid": 91, 00:11:22.953 "qid": 0, 00:11:22.953 "state": "enabled", 00:11:22.953 "thread": "nvmf_tgt_poll_group_000", 00:11:22.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 
00:11:22.953 "listen_address": { 00:11:22.953 "trtype": "TCP", 00:11:22.953 "adrfam": "IPv4", 00:11:22.953 "traddr": "10.0.0.3", 00:11:22.953 "trsvcid": "4420" 00:11:22.953 }, 00:11:22.953 "peer_address": { 00:11:22.953 "trtype": "TCP", 00:11:22.953 "adrfam": "IPv4", 00:11:22.953 "traddr": "10.0.0.1", 00:11:22.953 "trsvcid": "39884" 00:11:22.953 }, 00:11:22.953 "auth": { 00:11:22.953 "state": "completed", 00:11:22.953 "digest": "sha384", 00:11:22.953 "dhgroup": "ffdhe8192" 00:11:22.953 } 00:11:22.953 } 00:11:22.953 ]' 00:11:22.953 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:22.953 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:22.953 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:22.953 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:22.953 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:23.212 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.212 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.212 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.471 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:11:23.471 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:11:24.043 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.043 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:24.043 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.043 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.043 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.043 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:24.043 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:24.043 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:24.302 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:11:24.302 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:24.302 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:24.302 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:24.302 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:24.302 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.302 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.302 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.302 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.302 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.302 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.302 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.302 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.870 00:11:24.870 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:24.870 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.870 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:25.129 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.129 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.129 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.129 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.129 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.129 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:25.129 { 00:11:25.129 "cntlid": 93, 00:11:25.129 "qid": 0, 00:11:25.129 "state": "enabled", 00:11:25.129 "thread": 
"nvmf_tgt_poll_group_000", 00:11:25.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:25.129 "listen_address": { 00:11:25.129 "trtype": "TCP", 00:11:25.129 "adrfam": "IPv4", 00:11:25.129 "traddr": "10.0.0.3", 00:11:25.129 "trsvcid": "4420" 00:11:25.129 }, 00:11:25.129 "peer_address": { 00:11:25.129 "trtype": "TCP", 00:11:25.129 "adrfam": "IPv4", 00:11:25.129 "traddr": "10.0.0.1", 00:11:25.129 "trsvcid": "39910" 00:11:25.129 }, 00:11:25.129 "auth": { 00:11:25.129 "state": "completed", 00:11:25.129 "digest": "sha384", 00:11:25.129 "dhgroup": "ffdhe8192" 00:11:25.129 } 00:11:25.129 } 00:11:25.129 ]' 00:11:25.129 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:25.388 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:25.388 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:25.388 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:25.388 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:25.388 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.388 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.388 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.647 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:11:25.647 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:11:26.583 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.583 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:26.583 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.583 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.583 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.583 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:26.583 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:26.583 10:29:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:26.583 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:11:26.583 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:26.584 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:26.584 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:26.584 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:26.584 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.584 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key3 00:11:26.584 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.584 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.843 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.843 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:26.843 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:26.843 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:27.411 00:11:27.411 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:27.411 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.411 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:27.670 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.670 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.670 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.670 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.670 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.671 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:27.671 { 00:11:27.671 "cntlid": 95, 00:11:27.671 "qid": 0, 00:11:27.671 "state": "enabled", 00:11:27.671 
"thread": "nvmf_tgt_poll_group_000", 00:11:27.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:27.671 "listen_address": { 00:11:27.671 "trtype": "TCP", 00:11:27.671 "adrfam": "IPv4", 00:11:27.671 "traddr": "10.0.0.3", 00:11:27.671 "trsvcid": "4420" 00:11:27.671 }, 00:11:27.671 "peer_address": { 00:11:27.671 "trtype": "TCP", 00:11:27.671 "adrfam": "IPv4", 00:11:27.671 "traddr": "10.0.0.1", 00:11:27.671 "trsvcid": "39928" 00:11:27.671 }, 00:11:27.671 "auth": { 00:11:27.671 "state": "completed", 00:11:27.671 "digest": "sha384", 00:11:27.671 "dhgroup": "ffdhe8192" 00:11:27.671 } 00:11:27.671 } 00:11:27.671 ]' 00:11:27.671 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:27.671 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:27.671 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:27.930 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:27.930 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:27.930 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.930 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.930 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.189 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:11:28.189 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:11:29.125 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.125 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:29.125 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.125 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.125 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.125 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:29.125 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:29.125 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:29.125 10:29:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:29.125 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:29.125 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:11:29.125 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:29.125 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:29.125 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:29.125 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:29.125 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.126 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.126 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.126 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.126 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.126 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.126 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.126 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.693 00:11:29.693 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:29.693 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:29.693 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.953 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.953 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.953 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.953 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.953 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.953 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:29.953 { 00:11:29.953 "cntlid": 97, 00:11:29.953 "qid": 0, 00:11:29.953 "state": "enabled", 00:11:29.953 "thread": "nvmf_tgt_poll_group_000", 00:11:29.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:29.953 "listen_address": { 00:11:29.953 "trtype": "TCP", 00:11:29.953 "adrfam": "IPv4", 00:11:29.953 "traddr": "10.0.0.3", 00:11:29.953 "trsvcid": "4420" 00:11:29.953 }, 00:11:29.953 "peer_address": { 00:11:29.953 "trtype": "TCP", 00:11:29.953 "adrfam": "IPv4", 00:11:29.953 "traddr": "10.0.0.1", 00:11:29.953 "trsvcid": "34844" 00:11:29.953 }, 00:11:29.953 "auth": { 00:11:29.953 "state": "completed", 00:11:29.953 "digest": "sha512", 00:11:29.953 "dhgroup": "null" 00:11:29.953 } 00:11:29.953 } 00:11:29.953 ]' 00:11:29.953 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:29.953 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:29.953 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:29.953 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:29.953 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.953 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.953 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.953 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.522 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:11:30.522 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:11:31.093 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.093 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:31.093 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.093 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.093 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:31.093 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:31.093 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:31.093 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:31.352 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:11:31.352 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:31.352 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:31.352 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:31.352 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:31.352 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.352 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.353 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.353 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.353 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.353 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.353 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.353 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.920 00:11:31.920 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:31.920 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.920 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:32.180 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.180 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.180 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.180 10:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.180 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.180 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:32.180 { 00:11:32.180 "cntlid": 99, 00:11:32.180 "qid": 0, 00:11:32.180 "state": "enabled", 00:11:32.180 "thread": "nvmf_tgt_poll_group_000", 00:11:32.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:32.180 "listen_address": { 00:11:32.180 "trtype": "TCP", 00:11:32.180 "adrfam": "IPv4", 00:11:32.180 "traddr": "10.0.0.3", 00:11:32.180 "trsvcid": "4420" 00:11:32.180 }, 00:11:32.180 "peer_address": { 00:11:32.180 "trtype": "TCP", 00:11:32.180 "adrfam": "IPv4", 00:11:32.180 "traddr": "10.0.0.1", 00:11:32.180 "trsvcid": "34870" 00:11:32.180 }, 00:11:32.180 "auth": { 00:11:32.180 "state": "completed", 00:11:32.180 "digest": "sha512", 00:11:32.180 "dhgroup": "null" 00:11:32.180 } 00:11:32.180 } 00:11:32.180 ]' 00:11:32.180 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:32.180 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:32.180 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:32.180 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:32.180 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:32.180 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.180 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.180 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:11:32.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:11:33.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:33.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.319 10:29:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:33.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:33.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:33.578 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:11:33.578 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:33.578 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:33.578 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:33.578 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:33.578 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.578 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.578 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.578 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.578 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.578 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.578 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.578 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.838 00:11:33.838 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:33.838 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:33.838 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.406 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.406 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.406 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.406 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.406 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.406 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:34.406 { 00:11:34.406 "cntlid": 101, 00:11:34.406 "qid": 0, 00:11:34.406 "state": "enabled", 00:11:34.406 "thread": "nvmf_tgt_poll_group_000", 00:11:34.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:34.406 "listen_address": { 00:11:34.406 "trtype": "TCP", 00:11:34.406 "adrfam": "IPv4", 00:11:34.406 "traddr": "10.0.0.3", 00:11:34.406 "trsvcid": "4420" 00:11:34.406 }, 00:11:34.406 "peer_address": { 00:11:34.406 "trtype": "TCP", 00:11:34.406 "adrfam": "IPv4", 00:11:34.406 "traddr": "10.0.0.1", 00:11:34.406 "trsvcid": "34902" 00:11:34.406 }, 00:11:34.406 "auth": { 00:11:34.406 "state": "completed", 00:11:34.406 "digest": "sha512", 00:11:34.406 "dhgroup": "null" 00:11:34.406 } 00:11:34.406 } 00:11:34.406 ]' 00:11:34.406 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:34.406 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:34.406 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:34.406 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:34.406 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:34.406 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.406 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.406 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.665 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:11:34.665 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:11:35.602 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.602 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:35.602 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.602 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:35.602 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.602 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:35.602 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:35.602 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:35.862 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:11:35.862 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:35.862 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:35.862 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:35.862 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:35.862 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.862 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key3 00:11:35.862 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.862 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.862 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.862 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:35.862 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:35.862 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:36.121 00:11:36.121 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:36.121 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.121 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:36.379 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.379 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.379 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:36.379 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.379 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.379 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:36.379 { 00:11:36.379 "cntlid": 103, 00:11:36.379 "qid": 0, 00:11:36.379 "state": "enabled", 00:11:36.379 "thread": "nvmf_tgt_poll_group_000", 00:11:36.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:36.379 "listen_address": { 00:11:36.379 "trtype": "TCP", 00:11:36.379 "adrfam": "IPv4", 00:11:36.379 "traddr": "10.0.0.3", 00:11:36.379 "trsvcid": "4420" 00:11:36.379 }, 00:11:36.379 "peer_address": { 00:11:36.379 "trtype": "TCP", 00:11:36.379 "adrfam": "IPv4", 00:11:36.379 "traddr": "10.0.0.1", 00:11:36.379 "trsvcid": "34932" 00:11:36.379 }, 00:11:36.379 "auth": { 00:11:36.379 "state": "completed", 00:11:36.379 "digest": "sha512", 00:11:36.379 "dhgroup": "null" 00:11:36.379 } 00:11:36.379 } 00:11:36.379 ]' 00:11:36.379 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:36.638 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:36.638 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:36.638 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:36.638 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:36.638 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.638 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.638 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.897 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:11:36.897 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:11:37.833 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.833 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:37.833 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.833 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.833 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:11:37.833 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:37.833 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:37.833 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:37.833 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:38.092 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:11:38.092 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:38.092 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:38.092 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:38.092 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:38.092 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.092 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.092 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.093 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.093 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.093 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.093 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.093 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.352 00:11:38.352 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:38.352 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:38.352 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.610 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.610 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.610 
10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.610 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.610 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.610 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:38.610 { 00:11:38.610 "cntlid": 105, 00:11:38.610 "qid": 0, 00:11:38.610 "state": "enabled", 00:11:38.610 "thread": "nvmf_tgt_poll_group_000", 00:11:38.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:38.610 "listen_address": { 00:11:38.610 "trtype": "TCP", 00:11:38.610 "adrfam": "IPv4", 00:11:38.610 "traddr": "10.0.0.3", 00:11:38.610 "trsvcid": "4420" 00:11:38.610 }, 00:11:38.610 "peer_address": { 00:11:38.610 "trtype": "TCP", 00:11:38.610 "adrfam": "IPv4", 00:11:38.610 "traddr": "10.0.0.1", 00:11:38.610 "trsvcid": "36714" 00:11:38.610 }, 00:11:38.610 "auth": { 00:11:38.610 "state": "completed", 00:11:38.610 "digest": "sha512", 00:11:38.610 "dhgroup": "ffdhe2048" 00:11:38.610 } 00:11:38.610 } 00:11:38.610 ]' 00:11:38.610 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:38.610 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:38.610 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:38.868 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:38.868 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:38.868 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.868 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.868 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.126 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:11:39.126 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:11:39.693 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.693 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:39.693 10:29:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.693 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.693 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.693 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.693 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:39.693 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:40.261 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:11:40.261 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:40.261 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:40.261 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:40.261 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:40.261 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.261 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.261 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.261 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.261 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.261 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.261 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.261 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.529 00:11:40.529 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:40.529 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:40.529 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.796 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:11:40.796 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.796 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.796 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.796 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.796 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.796 { 00:11:40.796 "cntlid": 107, 00:11:40.796 "qid": 0, 00:11:40.796 "state": "enabled", 00:11:40.796 "thread": "nvmf_tgt_poll_group_000", 00:11:40.796 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:40.796 "listen_address": { 00:11:40.796 "trtype": "TCP", 00:11:40.796 "adrfam": "IPv4", 00:11:40.796 "traddr": "10.0.0.3", 00:11:40.796 "trsvcid": "4420" 00:11:40.796 }, 00:11:40.796 "peer_address": { 00:11:40.796 "trtype": "TCP", 00:11:40.796 "adrfam": "IPv4", 00:11:40.796 "traddr": "10.0.0.1", 00:11:40.796 "trsvcid": "36750" 00:11:40.796 }, 00:11:40.796 "auth": { 00:11:40.796 "state": "completed", 00:11:40.796 "digest": "sha512", 00:11:40.796 "dhgroup": "ffdhe2048" 00:11:40.796 } 00:11:40.796 } 00:11:40.796 ]' 00:11:40.796 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.796 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:40.796 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.796 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:40.796 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:40.796 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.796 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.796 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.364 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:11:41.364 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:11:41.932 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.932 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:41.932 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.932 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.932 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.932 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:41.932 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:41.932 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:42.191 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:11:42.191 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:42.191 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:42.191 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:42.191 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:42.191 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.191 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.191 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.191 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.191 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.191 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.191 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.191 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.450 00:11:42.450 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:42.450 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.450 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:11:42.710 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.710 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.710 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.710 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.710 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.710 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:42.710 { 00:11:42.710 "cntlid": 109, 00:11:42.710 "qid": 0, 00:11:42.710 "state": "enabled", 00:11:42.710 "thread": "nvmf_tgt_poll_group_000", 00:11:42.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:42.710 "listen_address": { 00:11:42.710 "trtype": "TCP", 00:11:42.710 "adrfam": "IPv4", 00:11:42.710 "traddr": "10.0.0.3", 00:11:42.710 "trsvcid": "4420" 00:11:42.710 }, 00:11:42.710 "peer_address": { 00:11:42.710 "trtype": "TCP", 00:11:42.710 "adrfam": "IPv4", 00:11:42.710 "traddr": "10.0.0.1", 00:11:42.710 "trsvcid": "36782" 00:11:42.710 }, 00:11:42.710 "auth": { 00:11:42.710 "state": "completed", 00:11:42.710 "digest": "sha512", 00:11:42.710 "dhgroup": "ffdhe2048" 00:11:42.710 } 00:11:42.710 } 00:11:42.710 ]' 00:11:42.710 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:42.969 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:42.969 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:42.969 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:42.969 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:42.969 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.969 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.969 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.228 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:11:43.228 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:11:44.164 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
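[editor's note] The entries above and below repeat one pattern per digest/dhgroup/key combination. The following is a hedged sketch, reconstructed only from commands visible in this log, of that per-iteration flow; it is not part of the log itself. Socket paths, NQNs and addresses are copied from the output, while the key labels key0/ckey0 refer to DH-HMAC-CHAP keys registered earlier in the run (not shown here) and should be read as placeholders.

    #!/usr/bin/env bash
    # Sketch of the loop body target/auth.sh appears to run for each
    # sha512 + dhgroup combination, assuming key0/ckey0 were already
    # registered with the target and host RPC servers.
    set -euo pipefail

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33

    digest=sha512
    dhgroup=ffdhe2048
    key=key0
    ckey=ckey0

    # 1. Restrict the host-side bdev_nvme layer to one digest/dhgroup pair.
    $RPC -s "$HOST_SOCK" bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # 2. Allow the host on the target subsystem with matching DH-HMAC-CHAP keys.
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"

    # 3. Attach a controller from the host side, authenticating with the same keys.
    $RPC -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"

    # 4. Verify the controller exists and the target reports a completed
    #    authentication with the expected digest and dhgroup.
    [[ "$($RPC -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == "$digest" ]]
    [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "$dhgroup" ]]
    [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]

    # 5. Tear down before the next combination.
    $RPC -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0

After the detach, the log shows the same credentials being exercised through the kernel initiator (nvme connect ... --dhchap-secret/--dhchap-ctrl-secret, then nvme disconnect) before nvmf_subsystem_remove_host clears the host entry for the next iteration. [end editor's note]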
00:11:44.164 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:44.164 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.164 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.164 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.164 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:44.164 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:44.164 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:44.424 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:11:44.424 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:44.424 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:44.424 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:44.424 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:44.424 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.424 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key3 00:11:44.424 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.424 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.424 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.424 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:44.424 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:44.424 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:44.683 00:11:44.683 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.683 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.683 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.942 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.942 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.942 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.942 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.942 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.942 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.942 { 00:11:44.942 "cntlid": 111, 00:11:44.942 "qid": 0, 00:11:44.942 "state": "enabled", 00:11:44.942 "thread": "nvmf_tgt_poll_group_000", 00:11:44.942 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:44.942 "listen_address": { 00:11:44.942 "trtype": "TCP", 00:11:44.942 "adrfam": "IPv4", 00:11:44.942 "traddr": "10.0.0.3", 00:11:44.942 "trsvcid": "4420" 00:11:44.942 }, 00:11:44.942 "peer_address": { 00:11:44.942 "trtype": "TCP", 00:11:44.942 "adrfam": "IPv4", 00:11:44.942 "traddr": "10.0.0.1", 00:11:44.942 "trsvcid": "36808" 00:11:44.942 }, 00:11:44.942 "auth": { 00:11:44.942 "state": "completed", 00:11:44.942 "digest": "sha512", 00:11:44.942 "dhgroup": "ffdhe2048" 00:11:44.942 } 00:11:44.942 } 00:11:44.942 ]' 00:11:44.942 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.942 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:44.942 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:45.201 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:45.201 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:45.201 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.201 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.201 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.459 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:11:45.459 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:11:46.027 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.027 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:46.027 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.027 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.027 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.027 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:46.027 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:46.027 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:46.027 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:46.287 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:11:46.287 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:46.287 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:46.287 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:46.287 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:46.287 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.287 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.287 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.287 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.287 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.287 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.287 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.287 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.854 00:11:46.854 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:46.854 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:11:46.854 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.854 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.854 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.854 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.854 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.854 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.854 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:46.854 { 00:11:46.854 "cntlid": 113, 00:11:46.854 "qid": 0, 00:11:46.854 "state": "enabled", 00:11:46.854 "thread": "nvmf_tgt_poll_group_000", 00:11:46.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:46.854 "listen_address": { 00:11:46.854 "trtype": "TCP", 00:11:46.854 "adrfam": "IPv4", 00:11:46.854 "traddr": "10.0.0.3", 00:11:46.854 "trsvcid": "4420" 00:11:46.854 }, 00:11:46.854 "peer_address": { 00:11:46.854 "trtype": "TCP", 00:11:46.854 "adrfam": "IPv4", 00:11:46.854 "traddr": "10.0.0.1", 00:11:46.854 "trsvcid": "36828" 00:11:46.854 }, 00:11:46.854 "auth": { 00:11:46.854 "state": "completed", 00:11:46.854 "digest": "sha512", 00:11:46.854 "dhgroup": "ffdhe3072" 00:11:46.854 } 00:11:46.854 } 00:11:46.854 ]' 00:11:46.854 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:47.113 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:47.113 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:47.113 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:47.113 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:47.113 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.113 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.113 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.373 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:11:47.373 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret 
DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:11:48.310 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.310 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:48.310 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.310 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.310 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.310 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:48.310 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:48.310 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:48.310 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:11:48.310 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:48.310 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:48.311 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:48.311 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:48.311 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.311 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.311 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.311 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.311 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.311 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.311 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.311 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.878 00:11:48.878 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:48.878 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.878 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:49.137 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.137 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.137 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.137 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.137 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.137 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:49.137 { 00:11:49.137 "cntlid": 115, 00:11:49.137 "qid": 0, 00:11:49.137 "state": "enabled", 00:11:49.137 "thread": "nvmf_tgt_poll_group_000", 00:11:49.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:49.137 "listen_address": { 00:11:49.137 "trtype": "TCP", 00:11:49.137 "adrfam": "IPv4", 00:11:49.137 "traddr": "10.0.0.3", 00:11:49.137 "trsvcid": "4420" 00:11:49.137 }, 00:11:49.137 "peer_address": { 00:11:49.137 "trtype": "TCP", 00:11:49.137 "adrfam": "IPv4", 00:11:49.137 "traddr": "10.0.0.1", 00:11:49.137 "trsvcid": "33772" 00:11:49.137 }, 00:11:49.137 "auth": { 00:11:49.137 "state": "completed", 00:11:49.137 "digest": "sha512", 00:11:49.137 "dhgroup": "ffdhe3072" 00:11:49.137 } 00:11:49.137 } 00:11:49.137 ]' 00:11:49.137 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:49.137 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:49.137 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:49.137 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:49.137 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:49.137 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.137 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.137 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.396 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:11:49.396 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid 
b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:11:50.062 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.322 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:50.322 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.322 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.322 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.322 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:50.322 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:50.322 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:50.581 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:11:50.581 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:50.581 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:50.581 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:50.581 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:50.581 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.581 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.581 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.581 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.581 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.581 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.581 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.581 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.840 00:11:50.840 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.840 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.840 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.100 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.100 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.100 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.100 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.100 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.100 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:51.100 { 00:11:51.100 "cntlid": 117, 00:11:51.100 "qid": 0, 00:11:51.100 "state": "enabled", 00:11:51.100 "thread": "nvmf_tgt_poll_group_000", 00:11:51.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:51.100 "listen_address": { 00:11:51.100 "trtype": "TCP", 00:11:51.100 "adrfam": "IPv4", 00:11:51.100 "traddr": "10.0.0.3", 00:11:51.100 "trsvcid": "4420" 00:11:51.100 }, 00:11:51.100 "peer_address": { 00:11:51.100 "trtype": "TCP", 00:11:51.100 "adrfam": "IPv4", 00:11:51.100 "traddr": "10.0.0.1", 00:11:51.100 "trsvcid": "33794" 00:11:51.100 }, 00:11:51.100 "auth": { 00:11:51.100 "state": "completed", 00:11:51.100 "digest": "sha512", 00:11:51.100 "dhgroup": "ffdhe3072" 00:11:51.100 } 00:11:51.100 } 00:11:51.100 ]' 00:11:51.100 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:51.360 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:51.360 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:51.360 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:51.360 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:51.360 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.360 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.360 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.619 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:11:51.619 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:11:52.555 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.556 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:52.556 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.556 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.556 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.556 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:52.556 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:52.556 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:52.815 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:11:52.815 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:52.815 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:52.815 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:52.815 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:52.815 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.815 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key3 00:11:52.815 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.815 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.815 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.815 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:52.815 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:52.815 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:53.074 00:11:53.074 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:53.074 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:53.074 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.335 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.335 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.335 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.335 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.335 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.335 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:53.335 { 00:11:53.335 "cntlid": 119, 00:11:53.335 "qid": 0, 00:11:53.335 "state": "enabled", 00:11:53.335 "thread": "nvmf_tgt_poll_group_000", 00:11:53.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:53.335 "listen_address": { 00:11:53.335 "trtype": "TCP", 00:11:53.335 "adrfam": "IPv4", 00:11:53.335 "traddr": "10.0.0.3", 00:11:53.335 "trsvcid": "4420" 00:11:53.335 }, 00:11:53.335 "peer_address": { 00:11:53.335 "trtype": "TCP", 00:11:53.335 "adrfam": "IPv4", 00:11:53.335 "traddr": "10.0.0.1", 00:11:53.335 "trsvcid": "33824" 00:11:53.335 }, 00:11:53.335 "auth": { 00:11:53.335 "state": "completed", 00:11:53.335 "digest": "sha512", 00:11:53.335 "dhgroup": "ffdhe3072" 00:11:53.335 } 00:11:53.335 } 00:11:53.335 ]' 00:11:53.335 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:53.594 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:53.594 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:53.594 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:53.594 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:53.595 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.595 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.595 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.853 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:11:53.853 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:11:54.787 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.787 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:54.787 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.787 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.787 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.788 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:54.788 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:54.788 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:54.788 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:55.046 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:11:55.046 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:55.046 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:55.046 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:55.046 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:55.046 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.046 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.046 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.046 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.046 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.046 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.046 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.046 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.306 00:11:55.306 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:55.306 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.306 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:55.875 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.875 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.875 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.875 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.875 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.875 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:55.875 { 00:11:55.875 "cntlid": 121, 00:11:55.875 "qid": 0, 00:11:55.875 "state": "enabled", 00:11:55.875 "thread": "nvmf_tgt_poll_group_000", 00:11:55.875 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:55.875 "listen_address": { 00:11:55.875 "trtype": "TCP", 00:11:55.875 "adrfam": "IPv4", 00:11:55.875 "traddr": "10.0.0.3", 00:11:55.875 "trsvcid": "4420" 00:11:55.875 }, 00:11:55.875 "peer_address": { 00:11:55.875 "trtype": "TCP", 00:11:55.875 "adrfam": "IPv4", 00:11:55.875 "traddr": "10.0.0.1", 00:11:55.875 "trsvcid": "33858" 00:11:55.875 }, 00:11:55.875 "auth": { 00:11:55.875 "state": "completed", 00:11:55.875 "digest": "sha512", 00:11:55.875 "dhgroup": "ffdhe4096" 00:11:55.875 } 00:11:55.875 } 00:11:55.875 ]' 00:11:55.875 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:55.875 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:55.875 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:55.875 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:55.875 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:55.875 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.875 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.876 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.134 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret 
DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:11:56.134 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:11:57.072 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.072 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:57.072 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.072 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.072 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.072 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:57.072 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:57.072 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:57.331 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:11:57.331 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:57.331 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:57.331 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:57.331 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:57.331 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.331 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.331 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.331 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.331 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.331 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.331 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.331 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.590 00:11:57.590 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:57.590 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:57.590 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.849 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.849 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.849 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.849 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.849 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.849 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:57.849 { 00:11:57.849 "cntlid": 123, 00:11:57.849 "qid": 0, 00:11:57.849 "state": "enabled", 00:11:57.849 "thread": "nvmf_tgt_poll_group_000", 00:11:57.849 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:11:57.849 "listen_address": { 00:11:57.849 "trtype": "TCP", 00:11:57.849 "adrfam": "IPv4", 00:11:57.849 "traddr": "10.0.0.3", 00:11:57.849 "trsvcid": "4420" 00:11:57.849 }, 00:11:57.849 "peer_address": { 00:11:57.849 "trtype": "TCP", 00:11:57.849 "adrfam": "IPv4", 00:11:57.849 "traddr": "10.0.0.1", 00:11:57.849 "trsvcid": "33888" 00:11:57.849 }, 00:11:57.849 "auth": { 00:11:57.849 "state": "completed", 00:11:57.849 "digest": "sha512", 00:11:57.849 "dhgroup": "ffdhe4096" 00:11:57.849 } 00:11:57.849 } 00:11:57.849 ]' 00:11:57.849 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:57.849 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:57.849 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:58.109 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:58.109 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:58.109 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.109 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.109 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.368 10:29:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:11:58.368 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:11:59.358 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.358 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:11:59.358 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.358 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.358 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.358 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:59.358 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:59.358 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:59.358 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:11:59.358 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:59.358 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:59.358 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:59.358 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:59.358 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.358 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.358 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.358 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.358 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.358 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.358 10:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.358 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.925 00:11:59.925 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:59.925 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.925 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:00.184 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.184 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.184 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.184 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.184 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.184 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:00.184 { 00:12:00.184 "cntlid": 125, 00:12:00.184 "qid": 0, 00:12:00.184 "state": "enabled", 00:12:00.184 "thread": "nvmf_tgt_poll_group_000", 00:12:00.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:12:00.184 "listen_address": { 00:12:00.184 "trtype": "TCP", 00:12:00.184 "adrfam": "IPv4", 00:12:00.184 "traddr": "10.0.0.3", 00:12:00.184 "trsvcid": "4420" 00:12:00.184 }, 00:12:00.184 "peer_address": { 00:12:00.184 "trtype": "TCP", 00:12:00.184 "adrfam": "IPv4", 00:12:00.184 "traddr": "10.0.0.1", 00:12:00.184 "trsvcid": "42358" 00:12:00.184 }, 00:12:00.184 "auth": { 00:12:00.184 "state": "completed", 00:12:00.184 "digest": "sha512", 00:12:00.184 "dhgroup": "ffdhe4096" 00:12:00.184 } 00:12:00.184 } 00:12:00.184 ]' 00:12:00.184 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:00.184 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:00.184 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:00.442 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:00.442 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:00.442 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.442 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.443 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.700 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:12:00.700 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:12:01.267 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.267 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:12:01.267 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.267 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.267 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.267 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:01.267 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:01.267 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:01.834 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:12:01.834 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:01.834 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:01.834 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:01.834 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:01.834 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.834 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key3 00:12:01.834 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.834 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.834 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.834 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:12:01.834 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:01.834 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:02.093 00:12:02.093 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:02.093 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.093 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:02.351 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.351 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.351 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.351 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.351 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.351 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:02.351 { 00:12:02.351 "cntlid": 127, 00:12:02.351 "qid": 0, 00:12:02.351 "state": "enabled", 00:12:02.351 "thread": "nvmf_tgt_poll_group_000", 00:12:02.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:12:02.351 "listen_address": { 00:12:02.351 "trtype": "TCP", 00:12:02.351 "adrfam": "IPv4", 00:12:02.351 "traddr": "10.0.0.3", 00:12:02.351 "trsvcid": "4420" 00:12:02.351 }, 00:12:02.351 "peer_address": { 00:12:02.351 "trtype": "TCP", 00:12:02.351 "adrfam": "IPv4", 00:12:02.351 "traddr": "10.0.0.1", 00:12:02.352 "trsvcid": "42386" 00:12:02.352 }, 00:12:02.352 "auth": { 00:12:02.352 "state": "completed", 00:12:02.352 "digest": "sha512", 00:12:02.352 "dhgroup": "ffdhe4096" 00:12:02.352 } 00:12:02.352 } 00:12:02.352 ]' 00:12:02.352 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:02.352 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:02.352 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:02.610 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:02.610 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:02.610 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.610 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.610 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.869 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:12:02.869 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:12:03.806 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.806 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:12:03.806 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.806 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.806 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.806 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:03.806 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:03.806 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:03.806 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:03.806 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:12:03.806 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:03.806 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:03.806 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:03.806 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:03.806 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.806 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.806 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.806 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.806 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.806 10:30:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.806 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.806 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.387 00:12:04.387 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:04.387 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:04.387 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.645 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.645 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.645 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.645 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.645 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.645 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:04.645 { 00:12:04.645 "cntlid": 129, 00:12:04.645 "qid": 0, 00:12:04.645 "state": "enabled", 00:12:04.645 "thread": "nvmf_tgt_poll_group_000", 00:12:04.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:12:04.645 "listen_address": { 00:12:04.645 "trtype": "TCP", 00:12:04.645 "adrfam": "IPv4", 00:12:04.645 "traddr": "10.0.0.3", 00:12:04.645 "trsvcid": "4420" 00:12:04.645 }, 00:12:04.645 "peer_address": { 00:12:04.645 "trtype": "TCP", 00:12:04.645 "adrfam": "IPv4", 00:12:04.645 "traddr": "10.0.0.1", 00:12:04.645 "trsvcid": "42418" 00:12:04.645 }, 00:12:04.645 "auth": { 00:12:04.645 "state": "completed", 00:12:04.645 "digest": "sha512", 00:12:04.645 "dhgroup": "ffdhe6144" 00:12:04.645 } 00:12:04.645 } 00:12:04.645 ]' 00:12:04.645 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:04.645 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:04.645 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:04.904 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:04.904 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:04.904 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.904 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.904 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.162 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:12:05.162 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:12:05.729 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.729 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:12:05.729 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.729 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.729 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.729 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:05.729 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:05.729 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:05.988 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:12:05.988 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:05.988 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:05.989 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:05.989 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:05.989 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.989 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.989 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.989 10:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.989 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.989 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.989 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.989 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.593 00:12:06.593 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:06.593 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.593 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:06.853 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.853 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.853 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.853 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.853 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.853 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:06.853 { 00:12:06.853 "cntlid": 131, 00:12:06.853 "qid": 0, 00:12:06.853 "state": "enabled", 00:12:06.853 "thread": "nvmf_tgt_poll_group_000", 00:12:06.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:12:06.853 "listen_address": { 00:12:06.853 "trtype": "TCP", 00:12:06.853 "adrfam": "IPv4", 00:12:06.853 "traddr": "10.0.0.3", 00:12:06.853 "trsvcid": "4420" 00:12:06.853 }, 00:12:06.853 "peer_address": { 00:12:06.853 "trtype": "TCP", 00:12:06.853 "adrfam": "IPv4", 00:12:06.853 "traddr": "10.0.0.1", 00:12:06.853 "trsvcid": "42466" 00:12:06.853 }, 00:12:06.853 "auth": { 00:12:06.853 "state": "completed", 00:12:06.853 "digest": "sha512", 00:12:06.853 "dhgroup": "ffdhe6144" 00:12:06.853 } 00:12:06.853 } 00:12:06.853 ]' 00:12:06.853 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:06.853 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:06.853 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:06.853 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:06.853 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:12:07.112 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.112 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.112 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.371 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:12:07.371 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:12:07.939 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.939 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:12:07.939 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.939 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.940 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.940 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:07.940 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:07.940 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:08.199 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:12:08.199 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:08.199 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:08.199 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:08.199 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:08.199 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.458 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.458 10:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.458 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.458 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.458 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.458 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.458 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.718 00:12:08.719 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.719 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:08.719 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.286 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.286 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.286 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.286 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.286 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.286 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:09.286 { 00:12:09.286 "cntlid": 133, 00:12:09.286 "qid": 0, 00:12:09.286 "state": "enabled", 00:12:09.286 "thread": "nvmf_tgt_poll_group_000", 00:12:09.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:12:09.286 "listen_address": { 00:12:09.286 "trtype": "TCP", 00:12:09.286 "adrfam": "IPv4", 00:12:09.286 "traddr": "10.0.0.3", 00:12:09.286 "trsvcid": "4420" 00:12:09.286 }, 00:12:09.286 "peer_address": { 00:12:09.286 "trtype": "TCP", 00:12:09.286 "adrfam": "IPv4", 00:12:09.286 "traddr": "10.0.0.1", 00:12:09.286 "trsvcid": "41364" 00:12:09.286 }, 00:12:09.286 "auth": { 00:12:09.286 "state": "completed", 00:12:09.286 "digest": "sha512", 00:12:09.286 "dhgroup": "ffdhe6144" 00:12:09.286 } 00:12:09.286 } 00:12:09.286 ]' 00:12:09.286 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:09.286 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:09.286 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:09.286 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:12:09.286 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:09.286 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.286 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.286 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.545 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:12:09.545 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:12:10.112 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.112 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:12:10.112 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.112 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.112 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.112 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:10.112 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:10.112 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:10.371 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:12:10.371 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:10.371 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:10.371 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:10.371 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:10.371 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.371 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key3 00:12:10.371 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.371 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.633 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.633 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:10.633 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:10.633 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:10.893 00:12:10.893 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:10.893 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.893 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:11.460 { 00:12:11.460 "cntlid": 135, 00:12:11.460 "qid": 0, 00:12:11.460 "state": "enabled", 00:12:11.460 "thread": "nvmf_tgt_poll_group_000", 00:12:11.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:12:11.460 "listen_address": { 00:12:11.460 "trtype": "TCP", 00:12:11.460 "adrfam": "IPv4", 00:12:11.460 "traddr": "10.0.0.3", 00:12:11.460 "trsvcid": "4420" 00:12:11.460 }, 00:12:11.460 "peer_address": { 00:12:11.460 "trtype": "TCP", 00:12:11.460 "adrfam": "IPv4", 00:12:11.460 "traddr": "10.0.0.1", 00:12:11.460 "trsvcid": "41394" 00:12:11.460 }, 00:12:11.460 "auth": { 00:12:11.460 "state": "completed", 00:12:11.460 "digest": "sha512", 00:12:11.460 "dhgroup": "ffdhe6144" 00:12:11.460 } 00:12:11.460 } 00:12:11.460 ]' 00:12:11.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:11.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:11.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:11.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:11.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:11.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.718 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:12:11.718 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:12:12.285 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.285 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:12:12.285 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.285 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.285 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.285 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:12.285 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:12.285 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:12.285 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:12.544 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:12:12.544 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:12.544 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:12.544 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:12.544 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:12.544 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.544 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.544 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.544 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.544 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.544 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.544 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.544 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.111 00:12:13.370 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:13.370 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:13.370 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.686 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.686 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.686 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.686 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.686 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.687 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:13.687 { 00:12:13.687 "cntlid": 137, 00:12:13.687 "qid": 0, 00:12:13.687 "state": "enabled", 00:12:13.687 "thread": "nvmf_tgt_poll_group_000", 00:12:13.687 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:12:13.687 "listen_address": { 00:12:13.687 "trtype": "TCP", 00:12:13.687 "adrfam": "IPv4", 00:12:13.687 "traddr": "10.0.0.3", 00:12:13.687 "trsvcid": "4420" 00:12:13.687 }, 00:12:13.687 "peer_address": { 00:12:13.687 "trtype": "TCP", 00:12:13.687 "adrfam": "IPv4", 00:12:13.687 "traddr": "10.0.0.1", 00:12:13.687 "trsvcid": "41420" 00:12:13.687 }, 00:12:13.687 "auth": { 00:12:13.687 "state": "completed", 00:12:13.687 "digest": "sha512", 00:12:13.687 "dhgroup": "ffdhe8192" 00:12:13.687 } 00:12:13.687 } 00:12:13.687 ]' 00:12:13.687 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:13.687 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:13.687 10:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:13.687 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:13.687 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:13.687 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.687 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.687 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.945 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:12:13.946 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:12:14.882 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.882 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:12:14.882 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.882 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.882 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.882 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:14.882 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:14.882 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:14.882 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:12:14.882 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:14.882 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:14.882 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:14.882 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:14.882 10:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.882 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.883 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.883 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.883 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.883 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.883 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.883 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.818 00:12:15.818 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:15.818 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.818 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:15.818 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.818 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.818 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.818 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.818 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.818 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:15.818 { 00:12:15.818 "cntlid": 139, 00:12:15.818 "qid": 0, 00:12:15.818 "state": "enabled", 00:12:15.818 "thread": "nvmf_tgt_poll_group_000", 00:12:15.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:12:15.818 "listen_address": { 00:12:15.818 "trtype": "TCP", 00:12:15.818 "adrfam": "IPv4", 00:12:15.818 "traddr": "10.0.0.3", 00:12:15.818 "trsvcid": "4420" 00:12:15.818 }, 00:12:15.818 "peer_address": { 00:12:15.818 "trtype": "TCP", 00:12:15.818 "adrfam": "IPv4", 00:12:15.818 "traddr": "10.0.0.1", 00:12:15.818 "trsvcid": "41452" 00:12:15.818 }, 00:12:15.818 "auth": { 00:12:15.818 "state": "completed", 00:12:15.818 "digest": "sha512", 00:12:15.818 "dhgroup": "ffdhe8192" 00:12:15.818 } 00:12:15.818 } 00:12:15.818 ]' 00:12:15.818 10:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:16.076 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:16.076 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:16.076 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:16.076 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:16.076 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.076 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.077 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.335 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:12:16.335 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: --dhchap-ctrl-secret DHHC-1:02:OWM0NGE2NDVkNzczZGUyNzgwZmQ3OTNhYTBkNmI2Mjg0ZjQzNjc1OTY5YzYwMmMyyPVorw==: 00:12:16.902 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.902 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:12:16.902 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.902 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.902 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.902 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:16.902 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:16.902 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:17.469 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:12:17.469 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:17.469 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:17.469 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:12:17.469 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:17.469 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.469 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.469 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.469 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.469 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.469 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.469 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.469 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.036 00:12:18.036 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.036 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.036 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:18.294 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.294 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.294 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.294 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.294 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.294 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:18.294 { 00:12:18.294 "cntlid": 141, 00:12:18.294 "qid": 0, 00:12:18.294 "state": "enabled", 00:12:18.294 "thread": "nvmf_tgt_poll_group_000", 00:12:18.294 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:12:18.294 "listen_address": { 00:12:18.295 "trtype": "TCP", 00:12:18.295 "adrfam": "IPv4", 00:12:18.295 "traddr": "10.0.0.3", 00:12:18.295 "trsvcid": "4420" 00:12:18.295 }, 00:12:18.295 "peer_address": { 00:12:18.295 "trtype": "TCP", 00:12:18.295 "adrfam": "IPv4", 00:12:18.295 "traddr": "10.0.0.1", 00:12:18.295 "trsvcid": "55816" 00:12:18.295 }, 00:12:18.295 "auth": { 00:12:18.295 "state": "completed", 00:12:18.295 "digest": 
"sha512", 00:12:18.295 "dhgroup": "ffdhe8192" 00:12:18.295 } 00:12:18.295 } 00:12:18.295 ]' 00:12:18.295 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:18.295 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:18.295 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:18.295 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:18.295 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:18.552 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.552 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.552 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.834 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:12:18.834 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:01:MjUwY2E2OGIxZmNhNDI4Mzc5NmZmYTE4ODliMDA0ZDicMihN: 00:12:19.406 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.406 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:12:19.406 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.406 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.406 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.406 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:19.406 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:19.406 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:19.664 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:12:19.664 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:19.664 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:12:19.664 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:19.664 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:19.664 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.664 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key3 00:12:19.664 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.664 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.664 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.664 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:19.664 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:19.664 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:20.600 00:12:20.600 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:20.600 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.600 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:20.600 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.600 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.600 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.600 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.600 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.600 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:20.600 { 00:12:20.600 "cntlid": 143, 00:12:20.600 "qid": 0, 00:12:20.600 "state": "enabled", 00:12:20.600 "thread": "nvmf_tgt_poll_group_000", 00:12:20.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:12:20.600 "listen_address": { 00:12:20.600 "trtype": "TCP", 00:12:20.600 "adrfam": "IPv4", 00:12:20.600 "traddr": "10.0.0.3", 00:12:20.600 "trsvcid": "4420" 00:12:20.600 }, 00:12:20.600 "peer_address": { 00:12:20.600 "trtype": "TCP", 00:12:20.600 "adrfam": "IPv4", 00:12:20.600 "traddr": "10.0.0.1", 00:12:20.600 "trsvcid": "55838" 00:12:20.600 }, 00:12:20.600 "auth": { 00:12:20.600 "state": "completed", 00:12:20.600 
"digest": "sha512", 00:12:20.600 "dhgroup": "ffdhe8192" 00:12:20.600 } 00:12:20.600 } 00:12:20.600 ]' 00:12:20.600 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:20.859 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:20.859 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:20.859 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:20.859 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:20.859 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.859 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.859 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.426 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:12:21.426 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:12:21.994 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.994 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:12:21.994 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.994 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.994 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.994 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:21.994 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:12:21.994 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:21.994 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:21.994 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:21.994 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:22.253 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:12:22.253 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:22.253 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:22.253 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:22.253 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:22.253 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.253 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.253 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.253 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.253 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.253 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.253 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.253 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.920 00:12:22.921 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:22.921 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:22.921 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.179 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.179 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.179 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.179 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.179 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.179 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:23.179 { 00:12:23.179 "cntlid": 145, 00:12:23.179 "qid": 0, 00:12:23.180 "state": "enabled", 00:12:23.180 "thread": "nvmf_tgt_poll_group_000", 00:12:23.180 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:12:23.180 "listen_address": { 00:12:23.180 "trtype": "TCP", 00:12:23.180 "adrfam": "IPv4", 00:12:23.180 "traddr": "10.0.0.3", 00:12:23.180 "trsvcid": "4420" 00:12:23.180 }, 00:12:23.180 "peer_address": { 00:12:23.180 "trtype": "TCP", 00:12:23.180 "adrfam": "IPv4", 00:12:23.180 "traddr": "10.0.0.1", 00:12:23.180 "trsvcid": "55868" 00:12:23.180 }, 00:12:23.180 "auth": { 00:12:23.180 "state": "completed", 00:12:23.180 "digest": "sha512", 00:12:23.180 "dhgroup": "ffdhe8192" 00:12:23.180 } 00:12:23.180 } 00:12:23.180 ]' 00:12:23.180 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:23.180 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:23.180 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:23.180 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:23.180 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:23.438 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.438 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.438 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.696 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:12:23.696 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:00:OGUwNzE0YzY3OWYxM2FkYjkzMDc3Y2RkYWI0NTA1MzA1ODM5YTk5Mzg2N2I0NzVjpwCJUQ==: --dhchap-ctrl-secret DHHC-1:03:MTMxZmVlY2RhYTVjOWEwZDUwOTRjYTJmNzk3ZTdkNzJiM2I1M2QwMWFiNzc2MWJlZGYzMTJiYTliZTYwYTU0ZSb4+Oc=: 00:12:24.262 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.262 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:12:24.262 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.262 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.262 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.262 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key1 00:12:24.262 10:30:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.262 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.262 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.262 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:12:24.262 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:24.262 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:12:24.262 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:24.262 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:24.262 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:24.262 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:24.262 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:12:24.262 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:24.262 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:24.827 request: 00:12:24.827 { 00:12:24.827 "name": "nvme0", 00:12:24.827 "trtype": "tcp", 00:12:24.827 "traddr": "10.0.0.3", 00:12:24.827 "adrfam": "ipv4", 00:12:24.827 "trsvcid": "4420", 00:12:24.827 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:24.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:12:24.827 "prchk_reftag": false, 00:12:24.827 "prchk_guard": false, 00:12:24.827 "hdgst": false, 00:12:24.827 "ddgst": false, 00:12:24.827 "dhchap_key": "key2", 00:12:24.827 "allow_unrecognized_csi": false, 00:12:24.827 "method": "bdev_nvme_attach_controller", 00:12:24.827 "req_id": 1 00:12:24.827 } 00:12:24.827 Got JSON-RPC error response 00:12:24.827 response: 00:12:24.827 { 00:12:24.827 "code": -5, 00:12:24.827 "message": "Input/output error" 00:12:24.827 } 00:12:25.085 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:25.085 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:25.085 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:25.085 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:25.085 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:12:25.085 
10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.085 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.085 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.085 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.085 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.085 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.085 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.085 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:25.085 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:25.085 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:25.085 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:25.085 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:25.085 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:25.085 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:25.085 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:25.085 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:25.085 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:25.649 request: 00:12:25.649 { 00:12:25.649 "name": "nvme0", 00:12:25.649 "trtype": "tcp", 00:12:25.649 "traddr": "10.0.0.3", 00:12:25.649 "adrfam": "ipv4", 00:12:25.649 "trsvcid": "4420", 00:12:25.649 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:25.649 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:12:25.649 "prchk_reftag": false, 00:12:25.649 "prchk_guard": false, 00:12:25.649 "hdgst": false, 00:12:25.649 "ddgst": false, 00:12:25.649 "dhchap_key": "key1", 00:12:25.649 "dhchap_ctrlr_key": "ckey2", 00:12:25.649 "allow_unrecognized_csi": false, 00:12:25.649 "method": "bdev_nvme_attach_controller", 00:12:25.649 "req_id": 1 00:12:25.649 } 00:12:25.650 Got JSON-RPC error response 00:12:25.650 response: 00:12:25.650 { 
00:12:25.650 "code": -5, 00:12:25.650 "message": "Input/output error" 00:12:25.650 } 00:12:25.650 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:25.650 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:25.650 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:25.650 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:25.650 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:12:25.650 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.650 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.650 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.650 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key1 00:12:25.650 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.650 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.650 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.650 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.650 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:25.650 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.650 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:25.650 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:25.650 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:25.650 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:25.650 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.650 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.650 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.270 
request: 00:12:26.270 { 00:12:26.270 "name": "nvme0", 00:12:26.270 "trtype": "tcp", 00:12:26.270 "traddr": "10.0.0.3", 00:12:26.270 "adrfam": "ipv4", 00:12:26.270 "trsvcid": "4420", 00:12:26.270 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:26.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:12:26.270 "prchk_reftag": false, 00:12:26.270 "prchk_guard": false, 00:12:26.270 "hdgst": false, 00:12:26.270 "ddgst": false, 00:12:26.270 "dhchap_key": "key1", 00:12:26.270 "dhchap_ctrlr_key": "ckey1", 00:12:26.270 "allow_unrecognized_csi": false, 00:12:26.270 "method": "bdev_nvme_attach_controller", 00:12:26.270 "req_id": 1 00:12:26.270 } 00:12:26.270 Got JSON-RPC error response 00:12:26.270 response: 00:12:26.270 { 00:12:26.270 "code": -5, 00:12:26.270 "message": "Input/output error" 00:12:26.270 } 00:12:26.270 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:26.270 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:26.270 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:26.270 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:26.270 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:12:26.270 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.270 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.270 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.270 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67206 00:12:26.270 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 67206 ']' 00:12:26.270 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 67206 00:12:26.270 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:12:26.270 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:26.270 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67206 00:12:26.542 killing process with pid 67206 00:12:26.542 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:26.542 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:26.542 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67206' 00:12:26.542 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 67206 00:12:26.542 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 67206 00:12:26.542 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:26.542 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:26.542 10:30:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:26.542 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.542 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70351 00:12:26.542 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:26.542 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70351 00:12:26.542 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 70351 ']' 00:12:26.542 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.542 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:26.542 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.542 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:26.542 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.110 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:27.110 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:12:27.110 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:27.110 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:27.110 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.110 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.110 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:27.110 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70351 00:12:27.110 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 70351 ']' 00:12:27.110 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.110 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:27.110 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
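The entries above show the target being relaunched for the keyring-based part of the test: nvmf_tgt is started inside the nvmf_tgt_ns_spdk namespace with --wait-for-rpc and the nvmf_auth log flag, and the harness then waits on its RPC socket. A minimal sketch of that restart, reusing only the binary path and flags that appear in this log (running the app in the background and the follow-up framework_start_init call are assumptions; neither is visible in this excerpt):

# relaunch the target paused at RPC init, with DHCHAP auth logging enabled
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!   # assumed: the harness records this PID and later kills/waits on it

# assumed follow-up once /var/tmp/spdk.sock is listening: resume initialization
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init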
00:12:27.110 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:27.110 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.370 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:27.370 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:12:27.370 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:12:27.370 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.370 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.370 null0 00:12:27.370 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.370 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:27.370 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vHx 00:12:27.370 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.370 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.370 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.370 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.AKl ]] 00:12:27.370 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AKl 00:12:27.370 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.370 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.370 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.370 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:27.370 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.DNk 00:12:27.370 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.370 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.370 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.370 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Wf5 ]] 00:12:27.370 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Wf5 00:12:27.370 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.370 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:27.629 10:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.UWJ 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.ygn ]] 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ygn 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.dvM 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key3 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
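With the target back up, the key files generated earlier in the run are registered in its keyring (keyring_file_add_key key0 through key3 plus the ckey* controller keys above), and the sha512/ffdhe8192 connect cycle is repeated for key3, as the entries that follow show. A condensed sketch of that key3 path, using only names, paths, and flags taken from this log (the host-side application on /var/tmp/host.sock is assumed to have the same key registered under the same name):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33

# target side (default RPC socket): register the key file and authorize the host with it
$rpc keyring_file_add_key key3 /tmp/spdk.key-sha512.dvM
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3

# host side: attach a controller that authenticates with that key, then verify it shows up
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers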
00:12:27.629 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:28.564 nvme0n1 00:12:28.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:28.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.823 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.823 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.823 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.823 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.823 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.823 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:28.823 { 00:12:28.823 "cntlid": 1, 00:12:28.823 "qid": 0, 00:12:28.823 "state": "enabled", 00:12:28.823 "thread": "nvmf_tgt_poll_group_000", 00:12:28.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:12:28.823 "listen_address": { 00:12:28.823 "trtype": "TCP", 00:12:28.823 "adrfam": "IPv4", 00:12:28.823 "traddr": "10.0.0.3", 00:12:28.823 "trsvcid": "4420" 00:12:28.823 }, 00:12:28.823 "peer_address": { 00:12:28.823 "trtype": "TCP", 00:12:28.823 "adrfam": "IPv4", 00:12:28.823 "traddr": "10.0.0.1", 00:12:28.823 "trsvcid": "49992" 00:12:28.823 }, 00:12:28.823 "auth": { 00:12:28.823 "state": "completed", 00:12:28.823 "digest": "sha512", 00:12:28.823 "dhgroup": "ffdhe8192" 00:12:28.823 } 00:12:28.823 } 00:12:28.823 ]' 00:12:28.823 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:28.823 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:28.823 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:29.081 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:29.081 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:29.081 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.081 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.081 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.340 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:12:29.340 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:12:30.275 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.275 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:12:30.275 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.275 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.275 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.275 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key3 00:12:30.275 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.275 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.275 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.275 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:30.275 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:30.534 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:30.534 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:30.534 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:30.534 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:30.534 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:30.534 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:30.534 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:30.534 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:30.534 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:30.534 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:30.793 request: 00:12:30.793 { 00:12:30.793 "name": "nvme0", 00:12:30.793 "trtype": "tcp", 00:12:30.793 "traddr": "10.0.0.3", 00:12:30.793 "adrfam": "ipv4", 00:12:30.793 "trsvcid": "4420", 00:12:30.793 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:30.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:12:30.793 "prchk_reftag": false, 00:12:30.793 "prchk_guard": false, 00:12:30.793 "hdgst": false, 00:12:30.793 "ddgst": false, 00:12:30.793 "dhchap_key": "key3", 00:12:30.793 "allow_unrecognized_csi": false, 00:12:30.793 "method": "bdev_nvme_attach_controller", 00:12:30.793 "req_id": 1 00:12:30.793 } 00:12:30.793 Got JSON-RPC error response 00:12:30.793 response: 00:12:30.793 { 00:12:30.793 "code": -5, 00:12:30.793 "message": "Input/output error" 00:12:30.793 } 00:12:30.793 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:30.793 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:30.793 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:30.793 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:30.793 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:12:30.793 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:12:30.793 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:30.793 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:31.139 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:31.139 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:31.139 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:31.139 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:31.139 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:31.139 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:31.139 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:31.139 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:31.139 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:31.139 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:31.397 request: 00:12:31.397 { 00:12:31.397 "name": "nvme0", 00:12:31.397 "trtype": "tcp", 00:12:31.397 "traddr": "10.0.0.3", 00:12:31.397 "adrfam": "ipv4", 00:12:31.397 "trsvcid": "4420", 00:12:31.397 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:31.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:12:31.397 "prchk_reftag": false, 00:12:31.397 "prchk_guard": false, 00:12:31.397 "hdgst": false, 00:12:31.397 "ddgst": false, 00:12:31.397 "dhchap_key": "key3", 00:12:31.397 "allow_unrecognized_csi": false, 00:12:31.397 "method": "bdev_nvme_attach_controller", 00:12:31.397 "req_id": 1 00:12:31.397 } 00:12:31.397 Got JSON-RPC error response 00:12:31.397 response: 00:12:31.397 { 00:12:31.397 "code": -5, 00:12:31.397 "message": "Input/output error" 00:12:31.397 } 00:12:31.397 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:31.397 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:31.397 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:31.397 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:31.397 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:31.397 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:12:31.397 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:31.397 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:31.397 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:31.397 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:31.656 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:12:31.656 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.656 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.656 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.656 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:12:31.656 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.656 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.656 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.656 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:31.656 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:31.656 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:31.656 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:31.656 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:31.656 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:31.656 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:31.656 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:31.656 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:31.656 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:32.222 request: 00:12:32.222 { 00:12:32.222 "name": "nvme0", 00:12:32.222 "trtype": "tcp", 00:12:32.222 "traddr": "10.0.0.3", 00:12:32.222 "adrfam": "ipv4", 00:12:32.222 "trsvcid": "4420", 00:12:32.222 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:32.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:12:32.222 "prchk_reftag": false, 00:12:32.222 "prchk_guard": false, 00:12:32.222 "hdgst": false, 00:12:32.222 "ddgst": false, 00:12:32.222 "dhchap_key": "key0", 00:12:32.222 "dhchap_ctrlr_key": "key1", 00:12:32.222 "allow_unrecognized_csi": false, 00:12:32.222 "method": "bdev_nvme_attach_controller", 00:12:32.222 "req_id": 1 00:12:32.222 } 00:12:32.222 Got JSON-RPC error response 00:12:32.222 response: 00:12:32.222 { 00:12:32.222 "code": -5, 00:12:32.222 "message": "Input/output error" 00:12:32.222 } 00:12:32.222 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:32.222 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:32.222 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:32.222 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:12:32.222 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:12:32.222 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:32.222 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:32.480 nvme0n1 00:12:32.480 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:12:32.480 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:12:32.480 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.740 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.740 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.740 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.307 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key1 00:12:33.307 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.307 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.307 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.307 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:33.307 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:33.307 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:34.242 nvme0n1 00:12:34.242 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:12:34.242 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.242 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:12:34.242 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.242 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:34.242 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.242 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.242 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.242 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:12:34.242 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:12:34.242 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.500 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.500 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:12:34.500 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid b4733420-cf17-49bc-adb6-f89fe6fa7a33 -l 0 --dhchap-secret DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: --dhchap-ctrl-secret DHHC-1:03:NTQzZmI4ZTEzZDM4NTQyZGFmZjE2YmJkODZiMzBkYzRmZTIwMWEzNDRkMTYyYWY4ODE5ZjNiYzlhM2NiOTZhMRbjLRI=: 00:12:35.438 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:12:35.438 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:12:35.438 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:12:35.438 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:12:35.438 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:12:35.438 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:12:35.438 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:12:35.438 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.438 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.696 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:12:35.696 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:35.696 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:12:35.696 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:35.696 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.696 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:35.697 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.697 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:35.697 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:35.697 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:36.291 request: 00:12:36.291 { 00:12:36.291 "name": "nvme0", 00:12:36.291 "trtype": "tcp", 00:12:36.291 "traddr": "10.0.0.3", 00:12:36.291 "adrfam": "ipv4", 00:12:36.291 "trsvcid": "4420", 00:12:36.291 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:36.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33", 00:12:36.291 "prchk_reftag": false, 00:12:36.291 "prchk_guard": false, 00:12:36.291 "hdgst": false, 00:12:36.291 "ddgst": false, 00:12:36.291 "dhchap_key": "key1", 00:12:36.291 "allow_unrecognized_csi": false, 00:12:36.291 "method": "bdev_nvme_attach_controller", 00:12:36.291 "req_id": 1 00:12:36.291 } 00:12:36.291 Got JSON-RPC error response 00:12:36.291 response: 00:12:36.291 { 00:12:36.291 "code": -5, 00:12:36.291 "message": "Input/output error" 00:12:36.291 } 00:12:36.291 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:36.291 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:36.291 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:36.291 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:36.292 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:36.292 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:36.292 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:37.228 nvme0n1 00:12:37.228 
10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:12:37.228 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.228 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:12:37.486 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.486 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.486 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.746 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:12:37.746 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.746 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.746 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.746 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:12:37.746 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:37.746 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:38.314 nvme0n1 00:12:38.314 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:12:38.314 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:12:38.314 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.572 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.572 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.572 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.831 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:38.831 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.831 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.831 10:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.831 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: '' 2s 00:12:38.831 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:38.831 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:38.831 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: 00:12:38.831 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:12:38.831 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:38.831 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:38.831 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: ]] 00:12:38.831 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YTg1MWIyY2JjZjdjNWE4ZTJlMGI0Mjk1YjcxZDMzYzFw9kV5: 00:12:38.831 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:12:38.831 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:38.831 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:40.773 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:12:40.773 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:12:40.773 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:12:40.773 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:12:40.773 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:12:40.773 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:12:40.773 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:12:40.773 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key1 --dhchap-ctrlr-key key2 00:12:40.773 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.773 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.773 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.773 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: 2s 00:12:40.773 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:40.773 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:40.773 10:30:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:12:40.773 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: 00:12:40.773 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:40.773 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:40.773 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:12:40.773 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: ]] 00:12:40.773 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NDlkMTBiMjNmMWJhMGI1MjNkMTE3YWE0ODU2MGJiNWUwODI0NzQ2MDJiMzU2YmNiHwH+NA==: 00:12:40.773 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:40.773 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:43.304 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:12:43.304 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:12:43.304 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:12:43.304 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:12:43.304 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:12:43.304 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:12:43.304 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:12:43.304 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.304 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:43.304 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.304 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.304 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.304 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:43.304 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:43.304 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:43.869 nvme0n1 00:12:44.126 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:44.126 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.126 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.126 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.126 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:44.126 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:44.691 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:12:44.691 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.691 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:12:44.948 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.948 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:12:44.948 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.949 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.949 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.949 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:12:44.949 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:12:45.207 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:12:45.207 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:12:45.207 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.465 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.465 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:45.465 10:30:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.465 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.465 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.465 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:45.465 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:45.465 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:45.465 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:45.465 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:45.465 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:45.465 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:45.465 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:45.466 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:46.400 request: 00:12:46.400 { 00:12:46.400 "name": "nvme0", 00:12:46.400 "dhchap_key": "key1", 00:12:46.400 "dhchap_ctrlr_key": "key3", 00:12:46.400 "method": "bdev_nvme_set_keys", 00:12:46.400 "req_id": 1 00:12:46.400 } 00:12:46.400 Got JSON-RPC error response 00:12:46.400 response: 00:12:46.400 { 00:12:46.400 "code": -13, 00:12:46.400 "message": "Permission denied" 00:12:46.400 } 00:12:46.400 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:46.400 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:46.400 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:46.400 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:46.400 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:12:46.400 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.400 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:12:46.660 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:12:46.660 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:12:47.594 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:12:47.594 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.594 10:30:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:12:47.853 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:12:47.853 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:47.853 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.853 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.853 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.853 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:47.853 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:47.853 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:48.791 nvme0n1 00:12:49.050 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:49.050 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.050 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.050 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.050 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:49.050 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:49.050 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:49.050 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:49.050 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.050 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:49.050 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.050 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 
--dhchap-key key2 --dhchap-ctrlr-key key0 00:12:49.050 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:49.618 request: 00:12:49.618 { 00:12:49.618 "name": "nvme0", 00:12:49.618 "dhchap_key": "key2", 00:12:49.618 "dhchap_ctrlr_key": "key0", 00:12:49.618 "method": "bdev_nvme_set_keys", 00:12:49.618 "req_id": 1 00:12:49.618 } 00:12:49.618 Got JSON-RPC error response 00:12:49.618 response: 00:12:49.618 { 00:12:49.618 "code": -13, 00:12:49.618 "message": "Permission denied" 00:12:49.618 } 00:12:49.618 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:49.618 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:49.618 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:49.618 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:49.618 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:49.618 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.618 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:49.876 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:12:49.876 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:12:50.811 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:50.811 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:50.811 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.396 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:12:51.396 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:12:51.396 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:12:51.396 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67238 00:12:51.396 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 67238 ']' 00:12:51.396 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 67238 00:12:51.396 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:12:51.396 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:51.396 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67238 00:12:51.396 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:51.396 killing process with pid 67238 00:12:51.396 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:51.396 10:30:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67238' 00:12:51.397 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 67238 00:12:51.397 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 67238 00:12:51.655 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:12:51.655 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:51.655 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:12:51.655 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:51.655 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:12:51.655 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:51.655 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:51.655 rmmod nvme_tcp 00:12:51.655 rmmod nvme_fabrics 00:12:51.655 rmmod nvme_keyring 00:12:51.655 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:51.655 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:12:51.655 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:12:51.655 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70351 ']' 00:12:51.655 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70351 00:12:51.655 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 70351 ']' 00:12:51.655 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 70351 00:12:51.655 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:12:51.655 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:51.655 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70351 00:12:51.655 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:51.655 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:51.655 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70351' 00:12:51.655 killing process with pid 70351 00:12:51.655 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 70351 00:12:51.655 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 70351 00:12:51.912 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:51.912 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:51.912 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:51.912 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:12:51.912 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:12:51.912 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:12:51.912 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:52.170 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:52.170 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:52.170 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:52.170 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:52.170 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:52.170 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:52.170 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:52.170 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:52.170 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:52.170 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:52.170 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:52.170 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:52.170 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:52.170 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:52.170 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:52.170 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:52.171 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.171 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.171 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.171 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:12:52.171 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.vHx /tmp/spdk.key-sha256.DNk /tmp/spdk.key-sha384.UWJ /tmp/spdk.key-sha512.dvM /tmp/spdk.key-sha512.AKl /tmp/spdk.key-sha384.Wf5 /tmp/spdk.key-sha256.ygn '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:12:52.171 00:12:52.171 real 3m18.452s 00:12:52.171 user 7m54.957s 00:12:52.171 sys 0m30.770s 00:12:52.171 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:52.171 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.171 ************************************ 00:12:52.171 END TEST nvmf_auth_target 
00:12:52.171 ************************************ 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:52.429 ************************************ 00:12:52.429 START TEST nvmf_bdevio_no_huge 00:12:52.429 ************************************ 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:52.429 * Looking for test storage... 00:12:52.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:12:52.429 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:52.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.430 --rc genhtml_branch_coverage=1 00:12:52.430 --rc genhtml_function_coverage=1 00:12:52.430 --rc genhtml_legend=1 00:12:52.430 --rc geninfo_all_blocks=1 00:12:52.430 --rc geninfo_unexecuted_blocks=1 00:12:52.430 00:12:52.430 ' 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:52.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.430 --rc genhtml_branch_coverage=1 00:12:52.430 --rc genhtml_function_coverage=1 00:12:52.430 --rc genhtml_legend=1 00:12:52.430 --rc geninfo_all_blocks=1 00:12:52.430 --rc geninfo_unexecuted_blocks=1 00:12:52.430 00:12:52.430 ' 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:52.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.430 --rc genhtml_branch_coverage=1 00:12:52.430 --rc genhtml_function_coverage=1 00:12:52.430 --rc genhtml_legend=1 00:12:52.430 --rc geninfo_all_blocks=1 00:12:52.430 --rc geninfo_unexecuted_blocks=1 00:12:52.430 00:12:52.430 ' 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:52.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.430 --rc genhtml_branch_coverage=1 00:12:52.430 --rc genhtml_function_coverage=1 00:12:52.430 --rc genhtml_legend=1 00:12:52.430 --rc geninfo_all_blocks=1 00:12:52.430 --rc geninfo_unexecuted_blocks=1 00:12:52.430 00:12:52.430 ' 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:52.430 
10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:52.430 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:52.430 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:52.690 
10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:52.690 Cannot find device "nvmf_init_br" 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:52.690 Cannot find device "nvmf_init_br2" 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:52.690 Cannot find device "nvmf_tgt_br" 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:52.690 Cannot find device "nvmf_tgt_br2" 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:52.690 Cannot find device "nvmf_init_br" 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:52.690 Cannot find device "nvmf_init_br2" 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:52.690 Cannot find device "nvmf_tgt_br" 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:52.690 Cannot find device "nvmf_tgt_br2" 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:52.690 Cannot find device "nvmf_br" 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:52.690 Cannot find device "nvmf_init_if" 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:52.690 Cannot find device "nvmf_init_if2" 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:12:52.690 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:52.690 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:52.690 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:52.691 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:52.691 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:52.949 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:52.949 10:30:53 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:52.949 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:52.949 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:52.949 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:52.949 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:52.949 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:52.949 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:52.949 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:52.949 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:52.949 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:52.949 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:52.949 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:52.949 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:52.949 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:12:52.949 00:12:52.949 --- 10.0.0.3 ping statistics --- 00:12:52.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.949 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:12:52.949 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:52.949 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:52.949 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:12:52.949 00:12:52.949 --- 10.0.0.4 ping statistics --- 00:12:52.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.949 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:12:52.949 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:52.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:52.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:12:52.949 00:12:52.949 --- 10.0.0.1 ping statistics --- 00:12:52.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.949 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:12:52.949 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:52.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:52.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:12:52.949 00:12:52.949 --- 10.0.0.2 ping statistics --- 00:12:52.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.949 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:12:52.949 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.949 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:12:52.949 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:52.949 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.949 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:52.949 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:52.949 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.950 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:52.950 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:52.950 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:52.950 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:52.950 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:52.950 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:52.950 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=71002 00:12:52.950 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:52.950 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 71002 00:12:52.950 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 71002 ']' 00:12:52.950 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.950 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:52.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.950 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.950 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:52.950 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:52.950 [2024-11-15 10:30:53.740729] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
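For anyone reconstructing the harness behaviour from this trace: the nvmf_veth_init sequence above builds the virtual topology the rest of the run depends on, before the target application is started. Condensed into plain ip/iptables commands it is roughly the following sketch (interface, namespace and address names are copied from the trace; the script's stale-device cleanup and error handling are omitted):

    # target-side interfaces live in their own network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # initiator side gets 10.0.0.1/.2, target side 10.0.0.3/.4 inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # the four bridge-side veth peers are joined on one bridge
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done
    # open NVMe/TCP port 4420; the SPDK_NVMF comment tag is what iptr greps away at teardown
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF: init_if 4420'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF: init_if2 4420'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF: bridge forward'
    ping -c 1 10.0.0.3    # connectivity check from the initiator side to a target address

The ping replies above confirm that the initiator interfaces can reach the target addresses through the bridge before nvmf_tgt is launched inside the namespace.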
00:12:52.950 [2024-11-15 10:30:53.740838] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:53.208 [2024-11-15 10:30:53.903157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:53.208 [2024-11-15 10:30:53.978925] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.208 [2024-11-15 10:30:53.978993] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:53.208 [2024-11-15 10:30:53.979006] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:53.208 [2024-11-15 10:30:53.979014] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:53.208 [2024-11-15 10:30:53.979021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:53.208 [2024-11-15 10:30:53.979944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:53.208 [2024-11-15 10:30:53.980028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:53.208 [2024-11-15 10:30:53.980094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.208 [2024-11-15 10:30:53.980094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:53.208 [2024-11-15 10:30:53.985225] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:54.142 [2024-11-15 10:30:54.770216] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:54.142 Malloc0 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.142 10:30:54 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:54.142 [2024-11-15 10:30:54.810382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:54.142 { 00:12:54.142 "params": { 00:12:54.142 "name": "Nvme$subsystem", 00:12:54.142 "trtype": "$TEST_TRANSPORT", 00:12:54.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:54.142 "adrfam": "ipv4", 00:12:54.142 "trsvcid": "$NVMF_PORT", 00:12:54.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:54.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:54.142 "hdgst": ${hdgst:-false}, 00:12:54.142 "ddgst": ${ddgst:-false} 00:12:54.142 }, 00:12:54.142 "method": "bdev_nvme_attach_controller" 00:12:54.142 } 00:12:54.142 EOF 00:12:54.142 )") 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
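The handful of rpc_cmd calls traced above are the entire target-side setup for this test: create the TCP transport, back a namespace with a 64 MiB malloc bdev, and expose it through subsystem nqn.2016-06.io.spdk:cnode1 on 10.0.0.3:4420. In these scripts rpc_cmd is, in effect, a wrapper around scripts/rpc.py talking to the application's RPC socket (/var/tmp/spdk.sock in this run), so spelled out explicitly the same configuration looks approximately like this sketch (arguments copied from the trace, not the literal commands the harness ran):

    # assumes nvmf_tgt is already running and serving RPCs on /var/tmp/spdk.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB IO unit size
    $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB bdev with 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The JSON blob generated next in the trace is the matching initiator-side configuration, fed to the bdevio binary via --json /dev/fd/62 so that it attaches to this listener as nqn.2016-06.io.spdk:host1.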
00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:12:54.142 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:54.142 "params": { 00:12:54.142 "name": "Nvme1", 00:12:54.142 "trtype": "tcp", 00:12:54.142 "traddr": "10.0.0.3", 00:12:54.142 "adrfam": "ipv4", 00:12:54.142 "trsvcid": "4420", 00:12:54.142 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:54.142 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:54.142 "hdgst": false, 00:12:54.142 "ddgst": false 00:12:54.142 }, 00:12:54.142 "method": "bdev_nvme_attach_controller" 00:12:54.142 }' 00:12:54.142 [2024-11-15 10:30:54.864669] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:12:54.142 [2024-11-15 10:30:54.864747] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71038 ] 00:12:54.401 [2024-11-15 10:30:55.015480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:54.401 [2024-11-15 10:30:55.082432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.401 [2024-11-15 10:30:55.082538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.401 [2024-11-15 10:30:55.082542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.401 [2024-11-15 10:30:55.096090] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:54.659 I/O targets: 00:12:54.659 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:54.659 00:12:54.659 00:12:54.659 CUnit - A unit testing framework for C - Version 2.1-3 00:12:54.659 http://cunit.sourceforge.net/ 00:12:54.659 00:12:54.659 00:12:54.659 Suite: bdevio tests on: Nvme1n1 00:12:54.659 Test: blockdev write read block ...passed 00:12:54.659 Test: blockdev write zeroes read block ...passed 00:12:54.659 Test: blockdev write zeroes read no split ...passed 00:12:54.659 Test: blockdev write zeroes read split ...passed 00:12:54.659 Test: blockdev write zeroes read split partial ...passed 00:12:54.659 Test: blockdev reset ...[2024-11-15 10:30:55.328723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:54.659 [2024-11-15 10:30:55.328844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2323310 (9): Bad file descriptor 00:12:54.659 [2024-11-15 10:30:55.342330] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:12:54.659 passed 00:12:54.659 Test: blockdev write read 8 blocks ...passed 00:12:54.659 Test: blockdev write read size > 128k ...passed 00:12:54.659 Test: blockdev write read invalid size ...passed 00:12:54.659 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.659 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.659 Test: blockdev write read max offset ...passed 00:12:54.659 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.659 Test: blockdev writev readv 8 blocks ...passed 00:12:54.659 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.659 Test: blockdev writev readv block ...passed 00:12:54.659 Test: blockdev writev readv size > 128k ...passed 00:12:54.659 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.659 Test: blockdev comparev and writev ...[2024-11-15 10:30:55.352378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:54.659 [2024-11-15 10:30:55.352578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:54.659 [2024-11-15 10:30:55.352833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:54.659 [2024-11-15 10:30:55.352984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:54.659 [2024-11-15 10:30:55.353423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:54.659 [2024-11-15 10:30:55.353459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:54.659 [2024-11-15 10:30:55.353483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:54.659 [2024-11-15 10:30:55.353496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:54.659 [2024-11-15 10:30:55.353799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:54.659 [2024-11-15 10:30:55.353819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:54.659 [2024-11-15 10:30:55.353838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:54.659 [2024-11-15 10:30:55.353850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:54.659 [2024-11-15 10:30:55.354155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:54.660 [2024-11-15 10:30:55.354183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:54.660 [2024-11-15 10:30:55.354204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:54.660 [2024-11-15 10:30:55.354216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:54.660 passed 00:12:54.660 Test: blockdev nvme passthru rw ...passed 00:12:54.660 Test: blockdev nvme passthru vendor specific ...[2024-11-15 10:30:55.355014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:54.660 passed 00:12:54.660 Test: blockdev nvme admin passthru ...[2024-11-15 10:30:55.355042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:54.660 [2024-11-15 10:30:55.355178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:54.660 [2024-11-15 10:30:55.355199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:54.660 [2024-11-15 10:30:55.355310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:54.660 [2024-11-15 10:30:55.355328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:54.660 [2024-11-15 10:30:55.355440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:54.660 [2024-11-15 10:30:55.355458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:54.660 passed 00:12:54.660 Test: blockdev copy ...passed 00:12:54.660 00:12:54.660 Run Summary: Type Total Ran Passed Failed Inactive 00:12:54.660 suites 1 1 n/a 0 0 00:12:54.660 tests 23 23 23 0 0 00:12:54.660 asserts 152 152 152 0 n/a 00:12:54.660 00:12:54.660 Elapsed time = 0.172 seconds 00:12:54.918 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.918 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.918 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:54.918 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.918 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:54.918 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:12:54.918 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:54.918 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:12:54.918 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:54.918 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:12:54.918 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:54.918 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:55.176 rmmod nvme_tcp 00:12:55.177 rmmod nvme_fabrics 00:12:55.177 rmmod nvme_keyring 00:12:55.177 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:55.177 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:12:55.177 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:12:55.177 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 71002 ']' 00:12:55.177 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 71002 00:12:55.177 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 71002 ']' 00:12:55.177 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 71002 00:12:55.177 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:12:55.177 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:55.177 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71002 00:12:55.177 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:12:55.177 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:12:55.177 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71002' 00:12:55.177 killing process with pid 71002 00:12:55.177 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 71002 00:12:55.177 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 71002 00:12:55.434 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:55.434 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:55.434 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:55.434 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:12:55.434 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:12:55.434 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:55.434 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:12:55.434 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:55.434 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:55.434 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:55.434 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:55.692 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:55.692 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:55.692 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:55.692 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:55.692 10:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:55.692 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:55.692 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:55.692 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:55.693 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:55.693 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:55.693 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:55.693 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:55.693 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.693 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.693 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.693 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:12:55.693 00:12:55.693 real 0m3.434s 00:12:55.693 user 0m10.331s 00:12:55.693 sys 0m1.387s 00:12:55.693 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:55.693 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:55.693 ************************************ 00:12:55.693 END TEST nvmf_bdevio_no_huge 00:12:55.693 ************************************ 00:12:55.693 10:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:55.693 10:30:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:55.693 10:30:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:55.693 10:30:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:55.952 ************************************ 00:12:55.952 START TEST nvmf_tls 00:12:55.952 ************************************ 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:55.952 * Looking for test storage... 
00:12:55.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:55.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.952 --rc genhtml_branch_coverage=1 00:12:55.952 --rc genhtml_function_coverage=1 00:12:55.952 --rc genhtml_legend=1 00:12:55.952 --rc geninfo_all_blocks=1 00:12:55.952 --rc geninfo_unexecuted_blocks=1 00:12:55.952 00:12:55.952 ' 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:55.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.952 --rc genhtml_branch_coverage=1 00:12:55.952 --rc genhtml_function_coverage=1 00:12:55.952 --rc genhtml_legend=1 00:12:55.952 --rc geninfo_all_blocks=1 00:12:55.952 --rc geninfo_unexecuted_blocks=1 00:12:55.952 00:12:55.952 ' 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:55.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.952 --rc genhtml_branch_coverage=1 00:12:55.952 --rc genhtml_function_coverage=1 00:12:55.952 --rc genhtml_legend=1 00:12:55.952 --rc geninfo_all_blocks=1 00:12:55.952 --rc geninfo_unexecuted_blocks=1 00:12:55.952 00:12:55.952 ' 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:55.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.952 --rc genhtml_branch_coverage=1 00:12:55.952 --rc genhtml_function_coverage=1 00:12:55.952 --rc genhtml_legend=1 00:12:55.952 --rc geninfo_all_blocks=1 00:12:55.952 --rc geninfo_unexecuted_blocks=1 00:12:55.952 00:12:55.952 ' 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.952 10:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.952 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:55.953 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:55.953 
10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:55.953 Cannot find device "nvmf_init_br" 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:55.953 Cannot find device "nvmf_init_br2" 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:55.953 Cannot find device "nvmf_tgt_br" 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:12:55.953 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:56.211 Cannot find device "nvmf_tgt_br2" 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:56.211 Cannot find device "nvmf_init_br" 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:56.211 Cannot find device "nvmf_init_br2" 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:56.211 Cannot find device "nvmf_tgt_br" 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:56.211 Cannot find device "nvmf_tgt_br2" 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:56.211 Cannot find device "nvmf_br" 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:56.211 Cannot find device "nvmf_init_if" 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:56.211 Cannot find device "nvmf_init_if2" 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:56.211 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:56.211 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:56.211 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:56.211 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:56.211 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:56.211 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:56.211 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:56.212 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:56.212 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:56.212 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:56.212 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:56.212 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:56.212 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:56.212 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:56.212 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:56.212 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:56.471 10:30:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:56.471 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:56.471 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:12:56.471 00:12:56.471 --- 10.0.0.3 ping statistics --- 00:12:56.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.471 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:56.471 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:56.471 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:12:56.471 00:12:56.471 --- 10.0.0.4 ping statistics --- 00:12:56.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.471 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:56.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:56.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:12:56.471 00:12:56.471 --- 10.0.0.1 ping statistics --- 00:12:56.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.471 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:56.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:56.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:12:56.471 00:12:56.471 --- 10.0.0.2 ping statistics --- 00:12:56.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.471 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71269 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71269 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71269 ']' 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.471 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:56.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.472 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.472 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:56.472 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:56.472 [2024-11-15 10:30:57.253506] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
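The nvmf_veth_init block above builds the test fabric from scratch: the target-side veth ends live in a dedicated network namespace (nvmf_tgt_if at 10.0.0.3 and nvmf_tgt_if2 at 10.0.0.4 inside nvmf_tgt_ns_spdk), the initiator-side ends stay in the root namespace (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2), all peer ends are enslaved to the nvmf_br bridge, iptables ACCEPT rules are inserted for TCP port 4420 on the initiator interfaces plus a FORWARD accept on the bridge, and one ping per address confirms reachability. The "Cannot find device" and "Cannot open network namespace" messages at the start are expected: teardown of any previous topology runs unconditionally, with each command followed by true. A condensed sketch of a single initiator/target pair, using the device names and addresses from the trace and omitting the second pair, the rule comments and error handling:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3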
00:12:56.472 [2024-11-15 10:30:57.253622] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.730 [2024-11-15 10:30:57.411503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.730 [2024-11-15 10:30:57.490223] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.730 [2024-11-15 10:30:57.490297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:56.730 [2024-11-15 10:30:57.490314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:56.730 [2024-11-15 10:30:57.490332] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:56.730 [2024-11-15 10:30:57.490347] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:56.730 [2024-11-15 10:30:57.490830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.665 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:57.665 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:12:57.665 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:57.665 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:57.665 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:57.665 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.665 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:12:57.665 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:57.922 true 00:12:57.922 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:57.922 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:12:58.488 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:12:58.488 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:12:58.488 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:58.784 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:58.784 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:12:59.042 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:12:59.042 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:12:59.042 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:12:59.301 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:12:59.301 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:12:59.560 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:12:59.561 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:12:59.561 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:59.561 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:12:59.820 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:12:59.820 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:12:59.820 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:00.081 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:00.081 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:13:00.338 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:13:00.338 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:13:00.338 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:00.596 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:13:00.596 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.TARIjO0WbZ 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.M1qiX3EiId 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.TARIjO0WbZ 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.M1qiX3EiId 00:13:01.162 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:01.421 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:01.681 [2024-11-15 10:31:02.488961] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:01.951 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.TARIjO0WbZ 00:13:01.951 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TARIjO0WbZ 00:13:01.951 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:02.209 [2024-11-15 10:31:02.922754] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.209 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:02.467 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:02.725 [2024-11-15 10:31:03.558986] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:02.725 [2024-11-15 10:31:03.559262] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:02.983 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:03.241 malloc0 00:13:03.241 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:03.500 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TARIjO0WbZ 00:13:03.759 10:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:04.017 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.TARIjO0WbZ 00:13:16.303 Initializing NVMe Controllers 00:13:16.303 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:16.303 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:16.303 Initialization complete. Launching workers. 00:13:16.303 ======================================================== 00:13:16.303 Latency(us) 00:13:16.303 Device Information : IOPS MiB/s Average min max 00:13:16.303 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8021.39 31.33 7981.01 1665.35 14212.06 00:13:16.303 ======================================================== 00:13:16.303 Total : 8021.39 31.33 7981.01 1665.35 14212.06 00:13:16.303 00:13:16.303 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TARIjO0WbZ 00:13:16.303 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:16.303 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:16.303 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:16.303 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TARIjO0WbZ 00:13:16.303 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:16.303 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71519 00:13:16.303 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:16.303 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:16.303 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71519 /var/tmp/bdevperf.sock 00:13:16.303 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71519 ']' 00:13:16.303 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:16.303 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:16.303 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:16.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
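What the setup above amounts to: the target is started with --wait-for-rpc so the socket layer can be configured before initialization, the default socket implementation is switched to ssl (sock_set_default_impl -i ssl), the tls-version and ktls options are exercised through set/get round-trips and finally pinned with --tls-version 13 before framework_start_init, and the subsystem is wired for TLS with a listener created with -k plus a host entry bound to a named key (keyring_file_add_key key0 followed by nvmf_subsystem_add_host --psk key0). The first datapath check is spdk_nvme_perf run from inside the target namespace with -S ssl and --psk-path pointing at the same key file. The two PSKs themselves come from format_interchange_psk; judging by the format_key helper visible in the trace, the base64 payload of an NVMeTLSkey-1:01:...: string is the configured secret bytes followed by their little-endian CRC32, with the 01 field naming the PSK digest (SHA-256 in the interchange format). A small sketch of that construction, shelling out to python the same way the helper does; the payload layout is inferred from the trace, not quoted from it:

  # sketch: rebuild the interchange key that tls.sh stores in /tmp/tmp.TARIjO0WbZ,
  # assuming the payload is the raw secret plus its little-endian CRC32
  key=00112233445566778899aabbccddeeff
  python3 -c 'import base64, struct, sys, zlib; s = sys.argv[1].encode(); print("NVMeTLSkey-1:01:%s:" % base64.b64encode(s + struct.pack("<I", zlib.crc32(s))).decode())' "$key"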
00:13:16.303 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:16.303 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:16.303 [2024-11-15 10:31:15.067523] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:13:16.303 [2024-11-15 10:31:15.067826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71519 ] 00:13:16.303 [2024-11-15 10:31:15.218119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.303 [2024-11-15 10:31:15.287933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.303 [2024-11-15 10:31:15.346200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:16.303 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:16.303 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:16.303 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TARIjO0WbZ 00:13:16.303 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:16.303 [2024-11-15 10:31:15.949653] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:16.303 TLSTESTn1 00:13:16.303 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:16.303 Running I/O for 10 seconds... 
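The per-second samples and the summary table that follow can be sanity-checked with two small identities. For a 4 KiB workload, MiB/s is IOPS × 4096 / 2^20, so the 3368.97 IOPS reported below corresponds to 3368.97 / 256 ≈ 13.16 MiB/s, exactly the figure in the table. And with a fixed queue depth, Little's law gives average latency ≈ QD / IOPS: 128 / 3368.97 ≈ 38.0 ms against the reported 37.9 ms for this bdevperf run, and 64 / 8021.39 ≈ 7.98 ms against the 7981 us average in the spdk_nvme_perf summary above.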
00:13:17.515 3328.00 IOPS, 13.00 MiB/s [2024-11-15T10:31:19.303Z] 3386.50 IOPS, 13.23 MiB/s [2024-11-15T10:31:20.239Z] 3392.00 IOPS, 13.25 MiB/s [2024-11-15T10:31:21.174Z] 3387.00 IOPS, 13.23 MiB/s [2024-11-15T10:31:22.548Z] 3379.20 IOPS, 13.20 MiB/s [2024-11-15T10:31:23.529Z] 3392.00 IOPS, 13.25 MiB/s [2024-11-15T10:31:24.465Z] 3389.14 IOPS, 13.24 MiB/s [2024-11-15T10:31:25.399Z] 3391.00 IOPS, 13.25 MiB/s [2024-11-15T10:31:26.334Z] 3384.89 IOPS, 13.22 MiB/s [2024-11-15T10:31:26.334Z] 3366.40 IOPS, 13.15 MiB/s 00:13:25.481 Latency(us) 00:13:25.481 [2024-11-15T10:31:26.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.482 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:25.482 Verification LBA range: start 0x0 length 0x2000 00:13:25.482 TLSTESTn1 : 10.03 3368.97 13.16 0.00 0.00 37915.88 8281.37 35031.97 00:13:25.482 [2024-11-15T10:31:26.335Z] =================================================================================================================== 00:13:25.482 [2024-11-15T10:31:26.335Z] Total : 3368.97 13.16 0.00 0.00 37915.88 8281.37 35031.97 00:13:25.482 { 00:13:25.482 "results": [ 00:13:25.482 { 00:13:25.482 "job": "TLSTESTn1", 00:13:25.482 "core_mask": "0x4", 00:13:25.482 "workload": "verify", 00:13:25.482 "status": "finished", 00:13:25.482 "verify_range": { 00:13:25.482 "start": 0, 00:13:25.482 "length": 8192 00:13:25.482 }, 00:13:25.482 "queue_depth": 128, 00:13:25.482 "io_size": 4096, 00:13:25.482 "runtime": 10.030367, 00:13:25.482 "iops": 3368.9694504697586, 00:13:25.482 "mibps": 13.160036915897495, 00:13:25.482 "io_failed": 0, 00:13:25.482 "io_timeout": 0, 00:13:25.482 "avg_latency_us": 37915.877906336085, 00:13:25.482 "min_latency_us": 8281.367272727273, 00:13:25.482 "max_latency_us": 35031.97090909091 00:13:25.482 } 00:13:25.482 ], 00:13:25.482 "core_count": 1 00:13:25.482 } 00:13:25.482 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:25.482 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71519 00:13:25.482 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71519 ']' 00:13:25.482 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71519 00:13:25.482 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:25.482 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:25.482 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71519 00:13:25.482 killing process with pid 71519 00:13:25.482 Received shutdown signal, test time was about 10.000000 seconds 00:13:25.482 00:13:25.482 Latency(us) 00:13:25.482 [2024-11-15T10:31:26.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.482 [2024-11-15T10:31:26.335Z] =================================================================================================================== 00:13:25.482 [2024-11-15T10:31:26.335Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:25.482 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:25.482 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:25.482 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 71519' 00:13:25.482 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71519 00:13:25.482 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71519 00:13:25.741 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.M1qiX3EiId 00:13:25.741 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:25.741 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.M1qiX3EiId 00:13:25.741 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:25.741 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:25.741 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:25.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:25.741 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:25.741 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.M1qiX3EiId 00:13:25.741 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:25.741 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:25.741 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:25.741 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.M1qiX3EiId 00:13:25.741 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:25.741 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71646 00:13:25.741 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:25.741 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71646 /var/tmp/bdevperf.sock 00:13:25.741 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71646 ']' 00:13:25.741 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:25.741 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:25.741 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:25.741 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:25.741 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:25.741 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:25.741 [2024-11-15 10:31:26.526611] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:13:25.741 [2024-11-15 10:31:26.526915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71646 ] 00:13:25.999 [2024-11-15 10:31:26.675921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.999 [2024-11-15 10:31:26.733610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.999 [2024-11-15 10:31:26.787418] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:26.258 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:26.258 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:26.258 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.M1qiX3EiId 00:13:26.516 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:26.774 [2024-11-15 10:31:27.471853] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:26.774 [2024-11-15 10:31:27.479504] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:26.774 [2024-11-15 10:31:27.479851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff6fb0 (107): Transport endpoint is not connected 00:13:26.774 [2024-11-15 10:31:27.480840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff6fb0 (9): Bad file descriptor 00:13:26.774 [2024-11-15 10:31:27.481836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:26.774 [2024-11-15 10:31:27.481866] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:26.774 [2024-11-15 10:31:27.481878] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:26.774 [2024-11-15 10:31:27.481894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:13:26.774 request: 00:13:26.774 { 00:13:26.774 "name": "TLSTEST", 00:13:26.774 "trtype": "tcp", 00:13:26.774 "traddr": "10.0.0.3", 00:13:26.774 "adrfam": "ipv4", 00:13:26.774 "trsvcid": "4420", 00:13:26.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:26.774 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:26.774 "prchk_reftag": false, 00:13:26.774 "prchk_guard": false, 00:13:26.774 "hdgst": false, 00:13:26.774 "ddgst": false, 00:13:26.774 "psk": "key0", 00:13:26.774 "allow_unrecognized_csi": false, 00:13:26.774 "method": "bdev_nvme_attach_controller", 00:13:26.774 "req_id": 1 00:13:26.774 } 00:13:26.774 Got JSON-RPC error response 00:13:26.774 response: 00:13:26.774 { 00:13:26.774 "code": -5, 00:13:26.774 "message": "Input/output error" 00:13:26.774 } 00:13:26.774 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71646 00:13:26.774 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71646 ']' 00:13:26.774 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71646 00:13:26.774 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:26.774 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:26.774 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71646 00:13:26.774 killing process with pid 71646 00:13:26.774 Received shutdown signal, test time was about 10.000000 seconds 00:13:26.774 00:13:26.774 Latency(us) 00:13:26.774 [2024-11-15T10:31:27.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.774 [2024-11-15T10:31:27.627Z] =================================================================================================================== 00:13:26.774 [2024-11-15T10:31:27.627Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:26.774 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:26.774 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:26.774 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71646' 00:13:26.774 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71646 00:13:26.774 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71646 00:13:27.032 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:27.032 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:27.032 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:27.032 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:27.032 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:27.032 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TARIjO0WbZ 00:13:27.032 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:27.032 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TARIjO0WbZ 
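The Input/output error above is the first of three deliberate negative cases in tls.sh: @147 attaches with a key the target never registered for host1 (/tmp/tmp.M1qiX3EiId), @150 uses the registered key but an unknown hostnqn (host2), and @153 uses it against an unknown subsystem (cnode2). In each case the connection is rejected, bdev_nvme_attach_controller comes back with code -5, bdevperf is torn down, and the NOT wrapper turns the expected non-zero exit into a pass (the return 1 / es=1 lines above are that bookkeeping). A minimal sketch of the wrapper's behaviour; the real helper in autotest_common.sh also handles the valid_exec_arg and xtrace details seen in the trace:

  NOT() {
      # run the wrapped command and remember its exit status
      local es=0
      "$@" || es=$?
      # the test passes only if the wrapped command failed
      (( es != 0 ))
  }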
00:13:27.033 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:27.033 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:27.033 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:27.033 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:27.033 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TARIjO0WbZ 00:13:27.033 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:27.033 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:27.033 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:27.033 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TARIjO0WbZ 00:13:27.033 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:27.033 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71671 00:13:27.033 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:27.033 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:27.033 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71671 /var/tmp/bdevperf.sock 00:13:27.033 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71671 ']' 00:13:27.033 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:27.033 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:27.033 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:27.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:27.033 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:27.033 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:27.033 [2024-11-15 10:31:27.774018] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:13:27.033 [2024-11-15 10:31:27.774287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71671 ] 00:13:27.291 [2024-11-15 10:31:27.918882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.291 [2024-11-15 10:31:27.979736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.291 [2024-11-15 10:31:28.032388] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:27.291 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:27.291 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:27.291 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TARIjO0WbZ 00:13:27.550 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:13:27.808 [2024-11-15 10:31:28.641882] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:27.808 [2024-11-15 10:31:28.649690] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:27.808 [2024-11-15 10:31:28.649933] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:27.808 [2024-11-15 10:31:28.650274] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:27.808 [2024-11-15 10:31:28.650817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23aefb0 (107): Transport endpoint is not connected 00:13:27.808 [2024-11-15 10:31:28.651806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23aefb0 (9): Bad file descriptor 00:13:27.808 [2024-11-15 10:31:28.652803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:27.808 [2024-11-15 10:31:28.652980] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:27.808 [2024-11-15 10:31:28.652999] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:27.809 [2024-11-15 10:31:28.653019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:13:27.809 request: 00:13:27.809 { 00:13:27.809 "name": "TLSTEST", 00:13:27.809 "trtype": "tcp", 00:13:27.809 "traddr": "10.0.0.3", 00:13:27.809 "adrfam": "ipv4", 00:13:27.809 "trsvcid": "4420", 00:13:27.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:27.809 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:27.809 "prchk_reftag": false, 00:13:27.809 "prchk_guard": false, 00:13:27.809 "hdgst": false, 00:13:27.809 "ddgst": false, 00:13:27.809 "psk": "key0", 00:13:27.809 "allow_unrecognized_csi": false, 00:13:27.809 "method": "bdev_nvme_attach_controller", 00:13:27.809 "req_id": 1 00:13:27.809 } 00:13:27.809 Got JSON-RPC error response 00:13:27.809 response: 00:13:27.809 { 00:13:27.809 "code": -5, 00:13:27.809 "message": "Input/output error" 00:13:27.809 } 00:13:28.067 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71671 00:13:28.067 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71671 ']' 00:13:28.067 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71671 00:13:28.067 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:28.067 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:28.067 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71671 00:13:28.067 killing process with pid 71671 00:13:28.067 Received shutdown signal, test time was about 10.000000 seconds 00:13:28.067 00:13:28.067 Latency(us) 00:13:28.067 [2024-11-15T10:31:28.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.067 [2024-11-15T10:31:28.920Z] =================================================================================================================== 00:13:28.067 [2024-11-15T10:31:28.920Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:28.067 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:28.067 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:28.067 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71671' 00:13:28.067 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71671 00:13:28.067 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71671 00:13:28.067 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:28.067 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:28.067 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:28.067 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:28.067 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:28.067 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TARIjO0WbZ 00:13:28.067 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:28.068 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TARIjO0WbZ 
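The Input/output error above is the expected result of this negative case: bdevperf holds the PSK under the name key0, but the target has no PSK registered for the nqn.2016-06.io.spdk:host2 / cnode1 pairing, so the handshake finds no matching identity ("Could not find PSK for identity" on the target side) and the attach fails. Condensing the initiator-side sequence that run_bdevperf drives, with paths relative to the SPDK checkout used by this job and the socket, NQNs and key file exactly as traced (an illustrative condensation, not part of the test script):

  # start bdevperf in wait mode on a private RPC socket
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # register the PSK file as "key0" in bdevperf's keyring
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TARIjO0WbZ
  # attempt the TLS-protected attach; without a matching PSK on the target this is expected to fail
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0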
00:13:28.068 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:28.068 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:28.068 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:28.068 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:28.068 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TARIjO0WbZ 00:13:28.068 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:28.068 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:28.068 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:28.068 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TARIjO0WbZ 00:13:28.068 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:28.068 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71692 00:13:28.068 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:28.068 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:28.068 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71692 /var/tmp/bdevperf.sock 00:13:28.068 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71692 ']' 00:13:28.068 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:28.068 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:28.068 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:28.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:28.068 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:28.068 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:28.326 [2024-11-15 10:31:28.944444] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:13:28.326 [2024-11-15 10:31:28.944551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71692 ] 00:13:28.326 [2024-11-15 10:31:29.090928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.326 [2024-11-15 10:31:29.151974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.584 [2024-11-15 10:31:29.204585] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:28.584 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:28.584 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:28.584 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TARIjO0WbZ 00:13:28.842 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:29.100 [2024-11-15 10:31:29.755310] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:29.100 [2024-11-15 10:31:29.760212] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:29.100 [2024-11-15 10:31:29.760260] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:29.100 [2024-11-15 10:31:29.760318] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:29.100 [2024-11-15 10:31:29.760938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x126ffb0 (107): Transport endpoint is not connected 00:13:29.100 [2024-11-15 10:31:29.761928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x126ffb0 (9): Bad file descriptor 00:13:29.100 [2024-11-15 10:31:29.762924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:13:29.101 [2024-11-15 10:31:29.762951] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:29.101 [2024-11-15 10:31:29.762963] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:13:29.101 [2024-11-15 10:31:29.762980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:13:29.101 request: 00:13:29.101 { 00:13:29.101 "name": "TLSTEST", 00:13:29.101 "trtype": "tcp", 00:13:29.101 "traddr": "10.0.0.3", 00:13:29.101 "adrfam": "ipv4", 00:13:29.101 "trsvcid": "4420", 00:13:29.101 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:29.101 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:29.101 "prchk_reftag": false, 00:13:29.101 "prchk_guard": false, 00:13:29.101 "hdgst": false, 00:13:29.101 "ddgst": false, 00:13:29.101 "psk": "key0", 00:13:29.101 "allow_unrecognized_csi": false, 00:13:29.101 "method": "bdev_nvme_attach_controller", 00:13:29.101 "req_id": 1 00:13:29.101 } 00:13:29.101 Got JSON-RPC error response 00:13:29.101 response: 00:13:29.101 { 00:13:29.101 "code": -5, 00:13:29.101 "message": "Input/output error" 00:13:29.101 } 00:13:29.101 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71692 00:13:29.101 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71692 ']' 00:13:29.101 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71692 00:13:29.101 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:29.101 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:29.101 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71692 00:13:29.101 killing process with pid 71692 00:13:29.101 Received shutdown signal, test time was about 10.000000 seconds 00:13:29.101 00:13:29.101 Latency(us) 00:13:29.101 [2024-11-15T10:31:29.954Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.101 [2024-11-15T10:31:29.954Z] =================================================================================================================== 00:13:29.101 [2024-11-15T10:31:29.954Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:29.101 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:29.101 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:29.101 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71692' 00:13:29.101 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71692 00:13:29.101 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71692 00:13:29.359 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:29.359 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:29.359 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:29.359 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:29.359 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:29.359 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:29.359 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:29.359 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:29.359 10:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:29.359 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:29.359 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:29.359 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:29.359 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:29.359 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:29.359 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:29.359 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:29.359 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:29.359 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:29.359 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71719 00:13:29.359 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:29.359 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:29.359 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71719 /var/tmp/bdevperf.sock 00:13:29.359 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71719 ']' 00:13:29.359 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:29.359 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:29.359 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:29.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:29.359 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:29.359 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:29.359 [2024-11-15 10:31:30.045097] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:13:29.359 [2024-11-15 10:31:30.045363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71719 ] 00:13:29.359 [2024-11-15 10:31:30.190464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.617 [2024-11-15 10:31:30.250634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.617 [2024-11-15 10:31:30.303840] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:29.617 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:29.617 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:29.617 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:13:29.875 [2024-11-15 10:31:30.607054] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:13:29.875 [2024-11-15 10:31:30.607138] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:29.875 request: 00:13:29.875 { 00:13:29.875 "name": "key0", 00:13:29.875 "path": "", 00:13:29.875 "method": "keyring_file_add_key", 00:13:29.875 "req_id": 1 00:13:29.875 } 00:13:29.875 Got JSON-RPC error response 00:13:29.875 response: 00:13:29.875 { 00:13:29.875 "code": -1, 00:13:29.875 "message": "Operation not permitted" 00:13:29.875 } 00:13:29.875 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:30.134 [2024-11-15 10:31:30.863259] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:30.134 [2024-11-15 10:31:30.863353] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:30.134 request: 00:13:30.134 { 00:13:30.134 "name": "TLSTEST", 00:13:30.134 "trtype": "tcp", 00:13:30.134 "traddr": "10.0.0.3", 00:13:30.134 "adrfam": "ipv4", 00:13:30.134 "trsvcid": "4420", 00:13:30.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:30.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:30.134 "prchk_reftag": false, 00:13:30.134 "prchk_guard": false, 00:13:30.134 "hdgst": false, 00:13:30.134 "ddgst": false, 00:13:30.134 "psk": "key0", 00:13:30.134 "allow_unrecognized_csi": false, 00:13:30.134 "method": "bdev_nvme_attach_controller", 00:13:30.134 "req_id": 1 00:13:30.134 } 00:13:30.134 Got JSON-RPC error response 00:13:30.134 response: 00:13:30.134 { 00:13:30.134 "code": -126, 00:13:30.134 "message": "Required key not available" 00:13:30.134 } 00:13:30.134 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71719 00:13:30.134 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71719 ']' 00:13:30.134 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71719 00:13:30.134 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:30.134 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:30.134 10:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71719 00:13:30.134 killing process with pid 71719 00:13:30.134 Received shutdown signal, test time was about 10.000000 seconds 00:13:30.134 00:13:30.134 Latency(us) 00:13:30.134 [2024-11-15T10:31:30.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.134 [2024-11-15T10:31:30.987Z] =================================================================================================================== 00:13:30.134 [2024-11-15T10:31:30.987Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:30.134 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:30.134 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:30.134 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71719' 00:13:30.134 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71719 00:13:30.134 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71719 00:13:30.393 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:30.393 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:30.393 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:30.393 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:30.393 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:30.393 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71269 00:13:30.393 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71269 ']' 00:13:30.393 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71269 00:13:30.393 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:30.393 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:30.393 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71269 00:13:30.393 killing process with pid 71269 00:13:30.393 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:30.393 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:30.393 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71269' 00:13:30.393 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71269 00:13:30.393 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71269 00:13:30.652 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:30.652 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:30.652 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:30.652 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:13:30.652 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:30.652 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:13:30.652 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:30.652 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:30.652 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:13:30.652 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.FC9YzIT312 00:13:30.652 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:30.652 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.FC9YzIT312 00:13:30.652 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:13:30.652 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:30.652 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:30.652 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:30.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.652 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71751 00:13:30.652 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71751 00:13:30.652 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:30.652 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71751 ']' 00:13:30.652 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.652 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:30.652 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.652 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:30.652 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:30.652 [2024-11-15 10:31:31.450765] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:13:30.652 [2024-11-15 10:31:31.451122] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.910 [2024-11-15 10:31:31.602125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.910 [2024-11-15 10:31:31.660485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.910 [2024-11-15 10:31:31.660762] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
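The key_long value above is built by format_interchange_psk / format_key from nvmf/common.sh: the raw key material plus a 4-byte CRC32 is base64-encoded and framed as an NVMe TLS PSK interchange string, with the 02 field corresponding to the SHA-384 variant. A sketch that approximates the helper using the prefix/key/digest values shown in the trace (the authoritative implementation lives in the repo; the CRC byte order here is an assumption):

  format_key() {
      local prefix=$1 key=$2 digest=$3
      # base64(key bytes + CRC32 of the key, little-endian assumed), framed as prefix:digest:...:
      python3 -c 'import base64,sys,zlib; k=sys.argv[2].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("{}:{:02}:{}:".format(sys.argv[1], int(sys.argv[3]), base64.b64encode(k+crc).decode()))' "$prefix" "$key" "$digest"
  }
  # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
  # should reproduce the NVMeTLSkey-1:02:MDAx...wWXNJw==: string written to /tmp/tmp.FC9YzIT312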
00:13:30.910 [2024-11-15 10:31:31.660919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.910 [2024-11-15 10:31:31.661115] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.910 [2024-11-15 10:31:31.661273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.910 [2024-11-15 10:31:31.661807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.910 [2024-11-15 10:31:31.716592] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:31.844 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:31.844 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:31.844 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:31.844 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:31.844 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:31.844 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:31.844 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.FC9YzIT312 00:13:31.844 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.FC9YzIT312 00:13:31.844 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:32.102 [2024-11-15 10:31:32.735774] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:32.102 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:32.360 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:32.619 [2024-11-15 10:31:33.331886] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:32.619 [2024-11-15 10:31:33.332468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:32.619 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:32.887 malloc0 00:13:32.887 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:33.144 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.FC9YzIT312 00:13:33.403 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:33.661 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FC9YzIT312 00:13:33.661 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
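setup_nvmf_tgt (target/tls.sh@50-59) assembles the TLS-enabled target in the order traced above: TCP transport, subsystem, a listener created with -k (TLS, hence the "TLS support is considered experimental" notice), a malloc namespace, the PSK file registered as key0, and the host entry bound to that key. Condensed to the underlying RPC calls, with rpc.py standing for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and the addresses and NQNs used by this job:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.FC9YzIT312
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0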
00:13:33.661 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:33.661 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:33.661 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.FC9YzIT312 00:13:33.661 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:33.661 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71807 00:13:33.661 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:33.661 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:33.661 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71807 /var/tmp/bdevperf.sock 00:13:33.661 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71807 ']' 00:13:33.661 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:33.661 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:33.661 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:33.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:33.661 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:33.661 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:33.920 [2024-11-15 10:31:34.535547] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
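On the bdevperf side this positive case repeats the attach sequence from the earlier iterations, but now the target holds the matching PSK for host1, so the controller comes up as bdev TLSTESTn1 and bdevperf.py drives the verify workload whose per-second IOPS samples follow below. Roughly, with the same rpc.py shorthand as above:

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FC9YzIT312
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests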
00:13:33.920 [2024-11-15 10:31:34.535949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71807 ] 00:13:33.920 [2024-11-15 10:31:34.688343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.920 [2024-11-15 10:31:34.755959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.182 [2024-11-15 10:31:34.813068] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:34.749 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:34.749 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:34.749 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FC9YzIT312 00:13:35.007 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:35.264 [2024-11-15 10:31:36.043274] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:35.264 TLSTESTn1 00:13:35.522 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:35.522 Running I/O for 10 seconds... 00:13:37.395 3303.00 IOPS, 12.90 MiB/s [2024-11-15T10:31:39.624Z] 3317.50 IOPS, 12.96 MiB/s [2024-11-15T10:31:40.562Z] 3323.33 IOPS, 12.98 MiB/s [2024-11-15T10:31:41.497Z] 3325.25 IOPS, 12.99 MiB/s [2024-11-15T10:31:42.431Z] 3326.00 IOPS, 12.99 MiB/s [2024-11-15T10:31:43.368Z] 3327.50 IOPS, 13.00 MiB/s [2024-11-15T10:31:44.331Z] 3327.14 IOPS, 13.00 MiB/s [2024-11-15T10:31:45.263Z] 3328.00 IOPS, 13.00 MiB/s [2024-11-15T10:31:46.641Z] 3328.00 IOPS, 13.00 MiB/s [2024-11-15T10:31:46.641Z] 3329.30 IOPS, 13.01 MiB/s 00:13:45.788 Latency(us) 00:13:45.788 [2024-11-15T10:31:46.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.788 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:45.788 Verification LBA range: start 0x0 length 0x2000 00:13:45.788 TLSTESTn1 : 10.02 3334.67 13.03 0.00 0.00 38302.85 1377.75 23831.27 00:13:45.788 [2024-11-15T10:31:46.641Z] =================================================================================================================== 00:13:45.788 [2024-11-15T10:31:46.641Z] Total : 3334.67 13.03 0.00 0.00 38302.85 1377.75 23831.27 00:13:45.788 { 00:13:45.788 "results": [ 00:13:45.788 { 00:13:45.788 "job": "TLSTESTn1", 00:13:45.788 "core_mask": "0x4", 00:13:45.788 "workload": "verify", 00:13:45.788 "status": "finished", 00:13:45.788 "verify_range": { 00:13:45.788 "start": 0, 00:13:45.788 "length": 8192 00:13:45.788 }, 00:13:45.788 "queue_depth": 128, 00:13:45.788 "io_size": 4096, 00:13:45.788 "runtime": 10.021391, 00:13:45.789 "iops": 3334.666814217707, 00:13:45.789 "mibps": 13.026042243037917, 00:13:45.789 "io_failed": 0, 00:13:45.789 "io_timeout": 0, 00:13:45.789 "avg_latency_us": 38302.85295915647, 00:13:45.789 "min_latency_us": 1377.7454545454545, 00:13:45.789 
"max_latency_us": 23831.272727272728 00:13:45.789 } 00:13:45.789 ], 00:13:45.789 "core_count": 1 00:13:45.789 } 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71807 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71807 ']' 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71807 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71807 00:13:45.789 killing process with pid 71807 00:13:45.789 Received shutdown signal, test time was about 10.000000 seconds 00:13:45.789 00:13:45.789 Latency(us) 00:13:45.789 [2024-11-15T10:31:46.642Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.789 [2024-11-15T10:31:46.642Z] =================================================================================================================== 00:13:45.789 [2024-11-15T10:31:46.642Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71807' 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71807 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71807 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.FC9YzIT312 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FC9YzIT312 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FC9YzIT312 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FC9YzIT312 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.FC9YzIT312 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71948 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:45.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71948 /var/tmp/bdevperf.sock 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71948 ']' 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:45.789 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:45.789 [2024-11-15 10:31:46.588689] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
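This iteration (chmod 0666 at target/tls.sh@171, then NOT run_bdevperf at @172) exercises the keyring's permission validation on the initiator side: keyring_file_add_key rejects a key file whose mode grants group or world access, so the failure reported below never reaches the TLS handshake at all. The check in isolation:

  chmod 0666 /tmp/tmp.FC9YzIT312
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FC9YzIT312
  # expected: "Invalid permissions for key file '/tmp/tmp.FC9YzIT312': 0100666", JSON-RPC error -1
  # (the later bdev_nvme_attach_controller then fails with -126 because key0 was never added)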
00:13:45.789 [2024-11-15 10:31:46.589103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71948 ] 00:13:46.048 [2024-11-15 10:31:46.742018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.048 [2024-11-15 10:31:46.804687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.048 [2024-11-15 10:31:46.884293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:46.308 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:46.308 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:46.308 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FC9YzIT312 00:13:46.567 [2024-11-15 10:31:47.257810] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.FC9YzIT312': 0100666 00:13:46.567 [2024-11-15 10:31:47.258085] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:46.567 request: 00:13:46.567 { 00:13:46.567 "name": "key0", 00:13:46.567 "path": "/tmp/tmp.FC9YzIT312", 00:13:46.567 "method": "keyring_file_add_key", 00:13:46.567 "req_id": 1 00:13:46.567 } 00:13:46.567 Got JSON-RPC error response 00:13:46.567 response: 00:13:46.567 { 00:13:46.567 "code": -1, 00:13:46.567 "message": "Operation not permitted" 00:13:46.567 } 00:13:46.567 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:46.949 [2024-11-15 10:31:47.510036] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:46.949 [2024-11-15 10:31:47.510161] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:46.949 request: 00:13:46.949 { 00:13:46.949 "name": "TLSTEST", 00:13:46.949 "trtype": "tcp", 00:13:46.949 "traddr": "10.0.0.3", 00:13:46.949 "adrfam": "ipv4", 00:13:46.949 "trsvcid": "4420", 00:13:46.949 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.949 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:46.949 "prchk_reftag": false, 00:13:46.949 "prchk_guard": false, 00:13:46.949 "hdgst": false, 00:13:46.949 "ddgst": false, 00:13:46.949 "psk": "key0", 00:13:46.949 "allow_unrecognized_csi": false, 00:13:46.949 "method": "bdev_nvme_attach_controller", 00:13:46.949 "req_id": 1 00:13:46.949 } 00:13:46.949 Got JSON-RPC error response 00:13:46.949 response: 00:13:46.949 { 00:13:46.949 "code": -126, 00:13:46.949 "message": "Required key not available" 00:13:46.949 } 00:13:46.949 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71948 00:13:46.949 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71948 ']' 00:13:46.949 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71948 00:13:46.949 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:46.949 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:46.949 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71948 00:13:46.949 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:46.949 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:46.949 killing process with pid 71948 00:13:46.949 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71948' 00:13:46.949 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71948 00:13:46.949 Received shutdown signal, test time was about 10.000000 seconds 00:13:46.949 00:13:46.949 Latency(us) 00:13:46.949 [2024-11-15T10:31:47.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.949 [2024-11-15T10:31:47.802Z] =================================================================================================================== 00:13:46.949 [2024-11-15T10:31:47.802Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:46.949 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71948 00:13:46.949 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:46.949 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:46.949 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:46.949 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:46.949 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:46.949 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71751 00:13:46.949 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71751 ']' 00:13:46.949 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71751 00:13:46.949 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:47.210 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:47.210 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71751 00:13:47.210 killing process with pid 71751 00:13:47.210 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:47.210 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:47.210 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71751' 00:13:47.210 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71751 00:13:47.210 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71751 00:13:47.210 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:13:47.210 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:47.210 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:47.210 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:13:47.210 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71974 00:13:47.210 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:47.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.210 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71974 00:13:47.210 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71974 ']' 00:13:47.210 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.210 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:47.210 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.210 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:47.210 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.469 [2024-11-15 10:31:48.120639] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:13:47.469 [2024-11-15 10:31:48.121085] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.469 [2024-11-15 10:31:48.272875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.729 [2024-11-15 10:31:48.335134] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.729 [2024-11-15 10:31:48.335365] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.729 [2024-11-15 10:31:48.335499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.729 [2024-11-15 10:31:48.335514] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.729 [2024-11-15 10:31:48.335523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
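The freshly started target (nvmfpid=71974) is used for the mirror-image case at target/tls.sh@178: setup_nvmf_tgt runs while the key file is still mode 0666, so the target-side keyring_file_add_key fails with the same permission error, and nvmf_subsystem_add_host --psk key0 then fails with -32603 because no key named key0 made it into the keyring. The failing tail of that setup, in RPC terms:

  rpc.py keyring_file_add_key key0 /tmp/tmp.FC9YzIT312    # rejected: the file is 0666, not 0600
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  # expected: "Key 'key0' does not exist" / "Unable to add host to TCP transport", JSON-RPC error -32603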
00:13:47.729 [2024-11-15 10:31:48.335974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.729 [2024-11-15 10:31:48.391474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:48.297 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:48.297 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:48.297 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:48.297 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:48.297 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:48.297 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.297 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.FC9YzIT312 00:13:48.297 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:48.297 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.FC9YzIT312 00:13:48.297 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:13:48.297 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:48.297 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:13:48.297 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:48.297 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.FC9YzIT312 00:13:48.297 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.FC9YzIT312 00:13:48.297 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:48.865 [2024-11-15 10:31:49.412647] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.865 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:49.124 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:49.383 [2024-11-15 10:31:50.000773] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:49.383 [2024-11-15 10:31:50.001077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:49.383 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:49.642 malloc0 00:13:49.642 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:49.901 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.FC9YzIT312 00:13:50.159 
[2024-11-15 10:31:50.891482] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.FC9YzIT312': 0100666 00:13:50.159 [2024-11-15 10:31:50.891900] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:50.159 request: 00:13:50.159 { 00:13:50.159 "name": "key0", 00:13:50.159 "path": "/tmp/tmp.FC9YzIT312", 00:13:50.159 "method": "keyring_file_add_key", 00:13:50.159 "req_id": 1 00:13:50.159 } 00:13:50.159 Got JSON-RPC error response 00:13:50.159 response: 00:13:50.159 { 00:13:50.159 "code": -1, 00:13:50.159 "message": "Operation not permitted" 00:13:50.159 } 00:13:50.159 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:50.418 [2024-11-15 10:31:51.199597] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:13:50.418 [2024-11-15 10:31:51.199719] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:13:50.418 request: 00:13:50.418 { 00:13:50.418 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:50.418 "host": "nqn.2016-06.io.spdk:host1", 00:13:50.418 "psk": "key0", 00:13:50.418 "method": "nvmf_subsystem_add_host", 00:13:50.418 "req_id": 1 00:13:50.418 } 00:13:50.418 Got JSON-RPC error response 00:13:50.418 response: 00:13:50.418 { 00:13:50.418 "code": -32603, 00:13:50.418 "message": "Internal error" 00:13:50.418 } 00:13:50.418 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:50.418 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:50.418 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:50.418 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:50.418 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71974 00:13:50.418 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71974 ']' 00:13:50.418 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71974 00:13:50.418 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:50.418 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:50.418 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71974 00:13:50.419 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:50.419 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:50.419 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71974' 00:13:50.419 killing process with pid 71974 00:13:50.419 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71974 00:13:50.419 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71974 00:13:50.996 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.FC9YzIT312 00:13:50.996 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:13:50.996 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:50.996 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:50.996 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:50.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.996 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72049 00:13:50.996 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:50.996 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72049 00:13:50.996 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72049 ']' 00:13:50.996 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.996 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:50.996 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.996 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:50.996 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:50.996 [2024-11-15 10:31:51.609047] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:13:50.996 [2024-11-15 10:31:51.609414] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.996 [2024-11-15 10:31:51.754911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.996 [2024-11-15 10:31:51.831957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.996 [2024-11-15 10:31:51.832320] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:50.996 [2024-11-15 10:31:51.832463] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.996 [2024-11-15 10:31:51.832607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.996 [2024-11-15 10:31:51.832647] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:50.996 [2024-11-15 10:31:51.833203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.254 [2024-11-15 10:31:51.904901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:51.254 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:51.254 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:51.254 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:51.254 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:51.254 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:51.254 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.254 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.FC9YzIT312 00:13:51.254 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.FC9YzIT312 00:13:51.254 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:51.514 [2024-11-15 10:31:52.288789] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:51.514 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:51.772 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:52.031 [2024-11-15 10:31:52.852980] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:52.031 [2024-11-15 10:31:52.853607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:52.031 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:52.598 malloc0 00:13:52.598 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:52.858 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.FC9YzIT312 00:13:52.858 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:53.117 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:53.117 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72097 00:13:53.117 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:53.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:13:53.117 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72097 /var/tmp/bdevperf.sock 00:13:53.117 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72097 ']' 00:13:53.117 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:53.117 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:53.117 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:53.117 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:53.117 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:53.376 [2024-11-15 10:31:54.018187] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:13:53.376 [2024-11-15 10:31:54.018560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72097 ] 00:13:53.376 [2024-11-15 10:31:54.169287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.635 [2024-11-15 10:31:54.240506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.635 [2024-11-15 10:31:54.298562] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:54.203 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:54.203 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:54.203 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FC9YzIT312 00:13:54.464 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:54.755 [2024-11-15 10:31:55.509239] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:54.756 TLSTESTn1 00:13:54.756 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:55.325 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:13:55.325 "subsystems": [ 00:13:55.325 { 00:13:55.325 "subsystem": "keyring", 00:13:55.325 "config": [ 00:13:55.325 { 00:13:55.325 "method": "keyring_file_add_key", 00:13:55.325 "params": { 00:13:55.325 "name": "key0", 00:13:55.325 "path": "/tmp/tmp.FC9YzIT312" 00:13:55.325 } 00:13:55.325 } 00:13:55.325 ] 00:13:55.325 }, 00:13:55.325 { 00:13:55.325 "subsystem": "iobuf", 00:13:55.325 "config": [ 00:13:55.325 { 00:13:55.325 "method": "iobuf_set_options", 00:13:55.325 "params": { 00:13:55.325 "small_pool_count": 8192, 00:13:55.325 "large_pool_count": 1024, 00:13:55.325 "small_bufsize": 8192, 00:13:55.325 "large_bufsize": 135168, 00:13:55.325 "enable_numa": false 00:13:55.325 } 00:13:55.325 } 00:13:55.325 ] 00:13:55.325 }, 00:13:55.325 { 00:13:55.325 
"subsystem": "sock", 00:13:55.325 "config": [ 00:13:55.325 { 00:13:55.325 "method": "sock_set_default_impl", 00:13:55.325 "params": { 00:13:55.325 "impl_name": "uring" 00:13:55.325 } 00:13:55.325 }, 00:13:55.325 { 00:13:55.325 "method": "sock_impl_set_options", 00:13:55.325 "params": { 00:13:55.325 "impl_name": "ssl", 00:13:55.325 "recv_buf_size": 4096, 00:13:55.325 "send_buf_size": 4096, 00:13:55.325 "enable_recv_pipe": true, 00:13:55.325 "enable_quickack": false, 00:13:55.325 "enable_placement_id": 0, 00:13:55.325 "enable_zerocopy_send_server": true, 00:13:55.325 "enable_zerocopy_send_client": false, 00:13:55.325 "zerocopy_threshold": 0, 00:13:55.326 "tls_version": 0, 00:13:55.326 "enable_ktls": false 00:13:55.326 } 00:13:55.326 }, 00:13:55.326 { 00:13:55.326 "method": "sock_impl_set_options", 00:13:55.326 "params": { 00:13:55.326 "impl_name": "posix", 00:13:55.326 "recv_buf_size": 2097152, 00:13:55.326 "send_buf_size": 2097152, 00:13:55.326 "enable_recv_pipe": true, 00:13:55.326 "enable_quickack": false, 00:13:55.326 "enable_placement_id": 0, 00:13:55.326 "enable_zerocopy_send_server": true, 00:13:55.326 "enable_zerocopy_send_client": false, 00:13:55.326 "zerocopy_threshold": 0, 00:13:55.326 "tls_version": 0, 00:13:55.326 "enable_ktls": false 00:13:55.326 } 00:13:55.326 }, 00:13:55.326 { 00:13:55.326 "method": "sock_impl_set_options", 00:13:55.326 "params": { 00:13:55.326 "impl_name": "uring", 00:13:55.326 "recv_buf_size": 2097152, 00:13:55.326 "send_buf_size": 2097152, 00:13:55.326 "enable_recv_pipe": true, 00:13:55.326 "enable_quickack": false, 00:13:55.326 "enable_placement_id": 0, 00:13:55.326 "enable_zerocopy_send_server": false, 00:13:55.326 "enable_zerocopy_send_client": false, 00:13:55.326 "zerocopy_threshold": 0, 00:13:55.326 "tls_version": 0, 00:13:55.326 "enable_ktls": false 00:13:55.326 } 00:13:55.326 } 00:13:55.326 ] 00:13:55.326 }, 00:13:55.326 { 00:13:55.326 "subsystem": "vmd", 00:13:55.326 "config": [] 00:13:55.326 }, 00:13:55.326 { 00:13:55.326 "subsystem": "accel", 00:13:55.326 "config": [ 00:13:55.326 { 00:13:55.326 "method": "accel_set_options", 00:13:55.326 "params": { 00:13:55.326 "small_cache_size": 128, 00:13:55.326 "large_cache_size": 16, 00:13:55.326 "task_count": 2048, 00:13:55.326 "sequence_count": 2048, 00:13:55.326 "buf_count": 2048 00:13:55.326 } 00:13:55.326 } 00:13:55.326 ] 00:13:55.326 }, 00:13:55.326 { 00:13:55.326 "subsystem": "bdev", 00:13:55.326 "config": [ 00:13:55.326 { 00:13:55.326 "method": "bdev_set_options", 00:13:55.326 "params": { 00:13:55.326 "bdev_io_pool_size": 65535, 00:13:55.326 "bdev_io_cache_size": 256, 00:13:55.326 "bdev_auto_examine": true, 00:13:55.326 "iobuf_small_cache_size": 128, 00:13:55.326 "iobuf_large_cache_size": 16 00:13:55.326 } 00:13:55.326 }, 00:13:55.326 { 00:13:55.326 "method": "bdev_raid_set_options", 00:13:55.326 "params": { 00:13:55.326 "process_window_size_kb": 1024, 00:13:55.326 "process_max_bandwidth_mb_sec": 0 00:13:55.326 } 00:13:55.326 }, 00:13:55.326 { 00:13:55.326 "method": "bdev_iscsi_set_options", 00:13:55.326 "params": { 00:13:55.326 "timeout_sec": 30 00:13:55.326 } 00:13:55.326 }, 00:13:55.326 { 00:13:55.326 "method": "bdev_nvme_set_options", 00:13:55.326 "params": { 00:13:55.326 "action_on_timeout": "none", 00:13:55.326 "timeout_us": 0, 00:13:55.326 "timeout_admin_us": 0, 00:13:55.326 "keep_alive_timeout_ms": 10000, 00:13:55.326 "arbitration_burst": 0, 00:13:55.326 "low_priority_weight": 0, 00:13:55.326 "medium_priority_weight": 0, 00:13:55.326 "high_priority_weight": 0, 00:13:55.326 
"nvme_adminq_poll_period_us": 10000, 00:13:55.326 "nvme_ioq_poll_period_us": 0, 00:13:55.326 "io_queue_requests": 0, 00:13:55.326 "delay_cmd_submit": true, 00:13:55.326 "transport_retry_count": 4, 00:13:55.326 "bdev_retry_count": 3, 00:13:55.326 "transport_ack_timeout": 0, 00:13:55.326 "ctrlr_loss_timeout_sec": 0, 00:13:55.326 "reconnect_delay_sec": 0, 00:13:55.326 "fast_io_fail_timeout_sec": 0, 00:13:55.326 "disable_auto_failback": false, 00:13:55.326 "generate_uuids": false, 00:13:55.326 "transport_tos": 0, 00:13:55.326 "nvme_error_stat": false, 00:13:55.326 "rdma_srq_size": 0, 00:13:55.326 "io_path_stat": false, 00:13:55.326 "allow_accel_sequence": false, 00:13:55.326 "rdma_max_cq_size": 0, 00:13:55.326 "rdma_cm_event_timeout_ms": 0, 00:13:55.326 "dhchap_digests": [ 00:13:55.326 "sha256", 00:13:55.326 "sha384", 00:13:55.326 "sha512" 00:13:55.326 ], 00:13:55.326 "dhchap_dhgroups": [ 00:13:55.326 "null", 00:13:55.326 "ffdhe2048", 00:13:55.326 "ffdhe3072", 00:13:55.326 "ffdhe4096", 00:13:55.326 "ffdhe6144", 00:13:55.326 "ffdhe8192" 00:13:55.326 ] 00:13:55.326 } 00:13:55.326 }, 00:13:55.326 { 00:13:55.326 "method": "bdev_nvme_set_hotplug", 00:13:55.326 "params": { 00:13:55.326 "period_us": 100000, 00:13:55.326 "enable": false 00:13:55.326 } 00:13:55.326 }, 00:13:55.326 { 00:13:55.326 "method": "bdev_malloc_create", 00:13:55.326 "params": { 00:13:55.326 "name": "malloc0", 00:13:55.326 "num_blocks": 8192, 00:13:55.326 "block_size": 4096, 00:13:55.326 "physical_block_size": 4096, 00:13:55.326 "uuid": "457fa6b5-ba2f-4d6e-b649-6d209a2b3cd6", 00:13:55.326 "optimal_io_boundary": 0, 00:13:55.326 "md_size": 0, 00:13:55.326 "dif_type": 0, 00:13:55.326 "dif_is_head_of_md": false, 00:13:55.326 "dif_pi_format": 0 00:13:55.326 } 00:13:55.326 }, 00:13:55.326 { 00:13:55.326 "method": "bdev_wait_for_examine" 00:13:55.326 } 00:13:55.326 ] 00:13:55.326 }, 00:13:55.326 { 00:13:55.326 "subsystem": "nbd", 00:13:55.326 "config": [] 00:13:55.326 }, 00:13:55.326 { 00:13:55.326 "subsystem": "scheduler", 00:13:55.326 "config": [ 00:13:55.326 { 00:13:55.326 "method": "framework_set_scheduler", 00:13:55.326 "params": { 00:13:55.326 "name": "static" 00:13:55.326 } 00:13:55.326 } 00:13:55.326 ] 00:13:55.326 }, 00:13:55.326 { 00:13:55.326 "subsystem": "nvmf", 00:13:55.326 "config": [ 00:13:55.326 { 00:13:55.326 "method": "nvmf_set_config", 00:13:55.326 "params": { 00:13:55.326 "discovery_filter": "match_any", 00:13:55.326 "admin_cmd_passthru": { 00:13:55.326 "identify_ctrlr": false 00:13:55.326 }, 00:13:55.326 "dhchap_digests": [ 00:13:55.326 "sha256", 00:13:55.326 "sha384", 00:13:55.326 "sha512" 00:13:55.327 ], 00:13:55.327 "dhchap_dhgroups": [ 00:13:55.327 "null", 00:13:55.327 "ffdhe2048", 00:13:55.327 "ffdhe3072", 00:13:55.327 "ffdhe4096", 00:13:55.327 "ffdhe6144", 00:13:55.327 "ffdhe8192" 00:13:55.327 ] 00:13:55.327 } 00:13:55.327 }, 00:13:55.327 { 00:13:55.327 "method": "nvmf_set_max_subsystems", 00:13:55.327 "params": { 00:13:55.327 "max_subsystems": 1024 00:13:55.327 } 00:13:55.327 }, 00:13:55.327 { 00:13:55.327 "method": "nvmf_set_crdt", 00:13:55.327 "params": { 00:13:55.327 "crdt1": 0, 00:13:55.327 "crdt2": 0, 00:13:55.327 "crdt3": 0 00:13:55.327 } 00:13:55.327 }, 00:13:55.327 { 00:13:55.327 "method": "nvmf_create_transport", 00:13:55.327 "params": { 00:13:55.327 "trtype": "TCP", 00:13:55.327 "max_queue_depth": 128, 00:13:55.327 "max_io_qpairs_per_ctrlr": 127, 00:13:55.327 "in_capsule_data_size": 4096, 00:13:55.327 "max_io_size": 131072, 00:13:55.327 "io_unit_size": 131072, 00:13:55.327 "max_aq_depth": 128, 
00:13:55.327 "num_shared_buffers": 511, 00:13:55.327 "buf_cache_size": 4294967295, 00:13:55.327 "dif_insert_or_strip": false, 00:13:55.327 "zcopy": false, 00:13:55.327 "c2h_success": false, 00:13:55.327 "sock_priority": 0, 00:13:55.327 "abort_timeout_sec": 1, 00:13:55.327 "ack_timeout": 0, 00:13:55.327 "data_wr_pool_size": 0 00:13:55.327 } 00:13:55.327 }, 00:13:55.327 { 00:13:55.327 "method": "nvmf_create_subsystem", 00:13:55.327 "params": { 00:13:55.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.327 "allow_any_host": false, 00:13:55.327 "serial_number": "SPDK00000000000001", 00:13:55.327 "model_number": "SPDK bdev Controller", 00:13:55.327 "max_namespaces": 10, 00:13:55.327 "min_cntlid": 1, 00:13:55.327 "max_cntlid": 65519, 00:13:55.327 "ana_reporting": false 00:13:55.327 } 00:13:55.327 }, 00:13:55.327 { 00:13:55.327 "method": "nvmf_subsystem_add_host", 00:13:55.327 "params": { 00:13:55.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.327 "host": "nqn.2016-06.io.spdk:host1", 00:13:55.327 "psk": "key0" 00:13:55.327 } 00:13:55.327 }, 00:13:55.327 { 00:13:55.327 "method": "nvmf_subsystem_add_ns", 00:13:55.327 "params": { 00:13:55.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.327 "namespace": { 00:13:55.327 "nsid": 1, 00:13:55.327 "bdev_name": "malloc0", 00:13:55.327 "nguid": "457FA6B5BA2F4D6EB6496D209A2B3CD6", 00:13:55.327 "uuid": "457fa6b5-ba2f-4d6e-b649-6d209a2b3cd6", 00:13:55.327 "no_auto_visible": false 00:13:55.327 } 00:13:55.327 } 00:13:55.327 }, 00:13:55.327 { 00:13:55.327 "method": "nvmf_subsystem_add_listener", 00:13:55.327 "params": { 00:13:55.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.327 "listen_address": { 00:13:55.327 "trtype": "TCP", 00:13:55.327 "adrfam": "IPv4", 00:13:55.327 "traddr": "10.0.0.3", 00:13:55.327 "trsvcid": "4420" 00:13:55.327 }, 00:13:55.327 "secure_channel": true 00:13:55.327 } 00:13:55.327 } 00:13:55.327 ] 00:13:55.327 } 00:13:55.327 ] 00:13:55.327 }' 00:13:55.327 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:55.587 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:13:55.587 "subsystems": [ 00:13:55.587 { 00:13:55.587 "subsystem": "keyring", 00:13:55.587 "config": [ 00:13:55.587 { 00:13:55.587 "method": "keyring_file_add_key", 00:13:55.587 "params": { 00:13:55.587 "name": "key0", 00:13:55.587 "path": "/tmp/tmp.FC9YzIT312" 00:13:55.587 } 00:13:55.587 } 00:13:55.587 ] 00:13:55.587 }, 00:13:55.587 { 00:13:55.587 "subsystem": "iobuf", 00:13:55.587 "config": [ 00:13:55.587 { 00:13:55.587 "method": "iobuf_set_options", 00:13:55.587 "params": { 00:13:55.587 "small_pool_count": 8192, 00:13:55.587 "large_pool_count": 1024, 00:13:55.587 "small_bufsize": 8192, 00:13:55.587 "large_bufsize": 135168, 00:13:55.587 "enable_numa": false 00:13:55.587 } 00:13:55.587 } 00:13:55.587 ] 00:13:55.587 }, 00:13:55.587 { 00:13:55.587 "subsystem": "sock", 00:13:55.587 "config": [ 00:13:55.587 { 00:13:55.587 "method": "sock_set_default_impl", 00:13:55.587 "params": { 00:13:55.587 "impl_name": "uring" 00:13:55.587 } 00:13:55.587 }, 00:13:55.587 { 00:13:55.587 "method": "sock_impl_set_options", 00:13:55.587 "params": { 00:13:55.587 "impl_name": "ssl", 00:13:55.587 "recv_buf_size": 4096, 00:13:55.587 "send_buf_size": 4096, 00:13:55.587 "enable_recv_pipe": true, 00:13:55.587 "enable_quickack": false, 00:13:55.587 "enable_placement_id": 0, 00:13:55.587 "enable_zerocopy_send_server": true, 00:13:55.587 
"enable_zerocopy_send_client": false, 00:13:55.587 "zerocopy_threshold": 0, 00:13:55.587 "tls_version": 0, 00:13:55.587 "enable_ktls": false 00:13:55.587 } 00:13:55.587 }, 00:13:55.587 { 00:13:55.587 "method": "sock_impl_set_options", 00:13:55.587 "params": { 00:13:55.587 "impl_name": "posix", 00:13:55.587 "recv_buf_size": 2097152, 00:13:55.587 "send_buf_size": 2097152, 00:13:55.587 "enable_recv_pipe": true, 00:13:55.587 "enable_quickack": false, 00:13:55.587 "enable_placement_id": 0, 00:13:55.587 "enable_zerocopy_send_server": true, 00:13:55.587 "enable_zerocopy_send_client": false, 00:13:55.587 "zerocopy_threshold": 0, 00:13:55.587 "tls_version": 0, 00:13:55.587 "enable_ktls": false 00:13:55.587 } 00:13:55.587 }, 00:13:55.587 { 00:13:55.587 "method": "sock_impl_set_options", 00:13:55.587 "params": { 00:13:55.587 "impl_name": "uring", 00:13:55.587 "recv_buf_size": 2097152, 00:13:55.587 "send_buf_size": 2097152, 00:13:55.587 "enable_recv_pipe": true, 00:13:55.587 "enable_quickack": false, 00:13:55.587 "enable_placement_id": 0, 00:13:55.587 "enable_zerocopy_send_server": false, 00:13:55.587 "enable_zerocopy_send_client": false, 00:13:55.587 "zerocopy_threshold": 0, 00:13:55.587 "tls_version": 0, 00:13:55.587 "enable_ktls": false 00:13:55.587 } 00:13:55.587 } 00:13:55.587 ] 00:13:55.587 }, 00:13:55.587 { 00:13:55.587 "subsystem": "vmd", 00:13:55.587 "config": [] 00:13:55.587 }, 00:13:55.587 { 00:13:55.587 "subsystem": "accel", 00:13:55.587 "config": [ 00:13:55.587 { 00:13:55.587 "method": "accel_set_options", 00:13:55.587 "params": { 00:13:55.587 "small_cache_size": 128, 00:13:55.587 "large_cache_size": 16, 00:13:55.587 "task_count": 2048, 00:13:55.587 "sequence_count": 2048, 00:13:55.587 "buf_count": 2048 00:13:55.587 } 00:13:55.587 } 00:13:55.587 ] 00:13:55.587 }, 00:13:55.587 { 00:13:55.587 "subsystem": "bdev", 00:13:55.587 "config": [ 00:13:55.587 { 00:13:55.587 "method": "bdev_set_options", 00:13:55.587 "params": { 00:13:55.587 "bdev_io_pool_size": 65535, 00:13:55.587 "bdev_io_cache_size": 256, 00:13:55.587 "bdev_auto_examine": true, 00:13:55.587 "iobuf_small_cache_size": 128, 00:13:55.587 "iobuf_large_cache_size": 16 00:13:55.587 } 00:13:55.587 }, 00:13:55.587 { 00:13:55.587 "method": "bdev_raid_set_options", 00:13:55.587 "params": { 00:13:55.587 "process_window_size_kb": 1024, 00:13:55.587 "process_max_bandwidth_mb_sec": 0 00:13:55.587 } 00:13:55.587 }, 00:13:55.587 { 00:13:55.587 "method": "bdev_iscsi_set_options", 00:13:55.587 "params": { 00:13:55.587 "timeout_sec": 30 00:13:55.587 } 00:13:55.587 }, 00:13:55.588 { 00:13:55.588 "method": "bdev_nvme_set_options", 00:13:55.588 "params": { 00:13:55.588 "action_on_timeout": "none", 00:13:55.588 "timeout_us": 0, 00:13:55.588 "timeout_admin_us": 0, 00:13:55.588 "keep_alive_timeout_ms": 10000, 00:13:55.588 "arbitration_burst": 0, 00:13:55.588 "low_priority_weight": 0, 00:13:55.588 "medium_priority_weight": 0, 00:13:55.588 "high_priority_weight": 0, 00:13:55.588 "nvme_adminq_poll_period_us": 10000, 00:13:55.588 "nvme_ioq_poll_period_us": 0, 00:13:55.588 "io_queue_requests": 512, 00:13:55.588 "delay_cmd_submit": true, 00:13:55.588 "transport_retry_count": 4, 00:13:55.588 "bdev_retry_count": 3, 00:13:55.588 "transport_ack_timeout": 0, 00:13:55.588 "ctrlr_loss_timeout_sec": 0, 00:13:55.588 "reconnect_delay_sec": 0, 00:13:55.588 "fast_io_fail_timeout_sec": 0, 00:13:55.588 "disable_auto_failback": false, 00:13:55.588 "generate_uuids": false, 00:13:55.588 "transport_tos": 0, 00:13:55.588 "nvme_error_stat": false, 00:13:55.588 "rdma_srq_size": 0, 
00:13:55.588 "io_path_stat": false, 00:13:55.588 "allow_accel_sequence": false, 00:13:55.588 "rdma_max_cq_size": 0, 00:13:55.588 "rdma_cm_event_timeout_ms": 0, 00:13:55.588 "dhchap_digests": [ 00:13:55.588 "sha256", 00:13:55.588 "sha384", 00:13:55.588 "sha512" 00:13:55.588 ], 00:13:55.588 "dhchap_dhgroups": [ 00:13:55.588 "null", 00:13:55.588 "ffdhe2048", 00:13:55.588 "ffdhe3072", 00:13:55.588 "ffdhe4096", 00:13:55.588 "ffdhe6144", 00:13:55.588 "ffdhe8192" 00:13:55.588 ] 00:13:55.588 } 00:13:55.588 }, 00:13:55.588 { 00:13:55.588 "method": "bdev_nvme_attach_controller", 00:13:55.588 "params": { 00:13:55.588 "name": "TLSTEST", 00:13:55.588 "trtype": "TCP", 00:13:55.588 "adrfam": "IPv4", 00:13:55.588 "traddr": "10.0.0.3", 00:13:55.588 "trsvcid": "4420", 00:13:55.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.588 "prchk_reftag": false, 00:13:55.588 "prchk_guard": false, 00:13:55.588 "ctrlr_loss_timeout_sec": 0, 00:13:55.588 "reconnect_delay_sec": 0, 00:13:55.588 "fast_io_fail_timeout_sec": 0, 00:13:55.588 "psk": "key0", 00:13:55.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:55.588 "hdgst": false, 00:13:55.588 "ddgst": false, 00:13:55.588 "multipath": "multipath" 00:13:55.588 } 00:13:55.588 }, 00:13:55.588 { 00:13:55.588 "method": "bdev_nvme_set_hotplug", 00:13:55.588 "params": { 00:13:55.588 "period_us": 100000, 00:13:55.588 "enable": false 00:13:55.588 } 00:13:55.588 }, 00:13:55.588 { 00:13:55.588 "method": "bdev_wait_for_examine" 00:13:55.588 } 00:13:55.588 ] 00:13:55.588 }, 00:13:55.588 { 00:13:55.588 "subsystem": "nbd", 00:13:55.588 "config": [] 00:13:55.588 } 00:13:55.588 ] 00:13:55.588 }' 00:13:55.588 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72097 00:13:55.588 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72097 ']' 00:13:55.588 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72097 00:13:55.588 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:55.588 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:55.588 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72097 00:13:55.588 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:55.588 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:55.588 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72097' 00:13:55.588 killing process with pid 72097 00:13:55.588 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72097 00:13:55.588 Received shutdown signal, test time was about 10.000000 seconds 00:13:55.588 00:13:55.588 Latency(us) 00:13:55.588 [2024-11-15T10:31:56.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.588 [2024-11-15T10:31:56.441Z] =================================================================================================================== 00:13:55.588 [2024-11-15T10:31:56.441Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:55.588 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72097 00:13:55.847 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 72049 00:13:55.847 10:31:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72049 ']' 00:13:55.847 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72049 00:13:55.847 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:55.847 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:55.847 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72049 00:13:55.847 killing process with pid 72049 00:13:55.847 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:55.847 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:55.847 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72049' 00:13:55.847 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72049 00:13:55.847 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72049 00:13:56.106 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:56.106 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:56.106 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:56.106 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:13:56.106 "subsystems": [ 00:13:56.106 { 00:13:56.106 "subsystem": "keyring", 00:13:56.106 "config": [ 00:13:56.106 { 00:13:56.106 "method": "keyring_file_add_key", 00:13:56.106 "params": { 00:13:56.106 "name": "key0", 00:13:56.106 "path": "/tmp/tmp.FC9YzIT312" 00:13:56.106 } 00:13:56.106 } 00:13:56.106 ] 00:13:56.106 }, 00:13:56.106 { 00:13:56.106 "subsystem": "iobuf", 00:13:56.106 "config": [ 00:13:56.106 { 00:13:56.106 "method": "iobuf_set_options", 00:13:56.106 "params": { 00:13:56.106 "small_pool_count": 8192, 00:13:56.106 "large_pool_count": 1024, 00:13:56.106 "small_bufsize": 8192, 00:13:56.106 "large_bufsize": 135168, 00:13:56.106 "enable_numa": false 00:13:56.106 } 00:13:56.106 } 00:13:56.106 ] 00:13:56.106 }, 00:13:56.106 { 00:13:56.106 "subsystem": "sock", 00:13:56.106 "config": [ 00:13:56.106 { 00:13:56.106 "method": "sock_set_default_impl", 00:13:56.106 "params": { 00:13:56.106 "impl_name": "uring" 00:13:56.106 } 00:13:56.106 }, 00:13:56.106 { 00:13:56.106 "method": "sock_impl_set_options", 00:13:56.106 "params": { 00:13:56.106 "impl_name": "ssl", 00:13:56.106 "recv_buf_size": 4096, 00:13:56.106 "send_buf_size": 4096, 00:13:56.106 "enable_recv_pipe": true, 00:13:56.106 "enable_quickack": false, 00:13:56.106 "enable_placement_id": 0, 00:13:56.106 "enable_zerocopy_send_server": true, 00:13:56.106 "enable_zerocopy_send_client": false, 00:13:56.106 "zerocopy_threshold": 0, 00:13:56.106 "tls_version": 0, 00:13:56.106 "enable_ktls": false 00:13:56.106 } 00:13:56.106 }, 00:13:56.106 { 00:13:56.106 "method": "sock_impl_set_options", 00:13:56.106 "params": { 00:13:56.106 "impl_name": "posix", 00:13:56.106 "recv_buf_size": 2097152, 00:13:56.106 "send_buf_size": 2097152, 00:13:56.106 "enable_recv_pipe": true, 00:13:56.106 "enable_quickack": false, 00:13:56.106 "enable_placement_id": 0, 00:13:56.106 "enable_zerocopy_send_server": true, 00:13:56.106 "enable_zerocopy_send_client": false, 00:13:56.106 
"zerocopy_threshold": 0, 00:13:56.106 "tls_version": 0, 00:13:56.106 "enable_ktls": false 00:13:56.106 } 00:13:56.106 }, 00:13:56.106 { 00:13:56.106 "method": "sock_impl_set_options", 00:13:56.106 "params": { 00:13:56.106 "impl_name": "uring", 00:13:56.106 "recv_buf_size": 2097152, 00:13:56.106 "send_buf_size": 2097152, 00:13:56.106 "enable_recv_pipe": true, 00:13:56.106 "enable_quickack": false, 00:13:56.106 "enable_placement_id": 0, 00:13:56.106 "enable_zerocopy_send_server": false, 00:13:56.106 "enable_zerocopy_send_client": false, 00:13:56.106 "zerocopy_threshold": 0, 00:13:56.106 "tls_version": 0, 00:13:56.106 "enable_ktls": false 00:13:56.106 } 00:13:56.106 } 00:13:56.106 ] 00:13:56.106 }, 00:13:56.106 { 00:13:56.106 "subsystem": "vmd", 00:13:56.106 "config": [] 00:13:56.106 }, 00:13:56.106 { 00:13:56.106 "subsystem": "accel", 00:13:56.106 "config": [ 00:13:56.106 { 00:13:56.106 "method": "accel_set_options", 00:13:56.106 "params": { 00:13:56.107 "small_cache_size": 128, 00:13:56.107 "large_cache_size": 16, 00:13:56.107 "task_count": 2048, 00:13:56.107 "sequence_count": 2048, 00:13:56.107 "buf_count": 2048 00:13:56.107 } 00:13:56.107 } 00:13:56.107 ] 00:13:56.107 }, 00:13:56.107 { 00:13:56.107 "subsystem": "bdev", 00:13:56.107 "config": [ 00:13:56.107 { 00:13:56.107 "method": "bdev_set_options", 00:13:56.107 "params": { 00:13:56.107 "bdev_io_pool_size": 65535, 00:13:56.107 "bdev_io_cache_size": 256, 00:13:56.107 "bdev_auto_examine": true, 00:13:56.107 "iobuf_small_cache_size": 128, 00:13:56.107 "iobuf_large_cache_size": 16 00:13:56.107 } 00:13:56.107 }, 00:13:56.107 { 00:13:56.107 "method": "bdev_raid_set_options", 00:13:56.107 "params": { 00:13:56.107 "process_window_size_kb": 1024, 00:13:56.107 "process_max_bandwidth_mb_sec": 0 00:13:56.107 } 00:13:56.107 }, 00:13:56.107 { 00:13:56.107 "method": "bdev_iscsi_set_options", 00:13:56.107 "params": { 00:13:56.107 "timeout_sec": 30 00:13:56.107 } 00:13:56.107 }, 00:13:56.107 { 00:13:56.107 "method": "bdev_nvme_set_options", 00:13:56.107 "params": { 00:13:56.107 "action_on_timeout": "none", 00:13:56.107 "timeout_us": 0, 00:13:56.107 "timeout_admin_us": 0, 00:13:56.107 "keep_alive_timeout_ms": 10000, 00:13:56.107 "arbitration_burst": 0, 00:13:56.107 "low_priority_weight": 0, 00:13:56.107 "medium_priority_weight": 0, 00:13:56.107 "high_priority_weight": 0, 00:13:56.107 "nvme_adminq_poll_period_us": 10000, 00:13:56.107 "nvme_ioq_poll_period_us": 0, 00:13:56.107 "io_queue_requests": 0, 00:13:56.107 "delay_cmd_submit": true, 00:13:56.107 "transport_retry_count": 4, 00:13:56.107 "bdev_retry_count": 3, 00:13:56.107 "transport_ack_timeout": 0, 00:13:56.107 "ctrlr_loss_timeout_sec": 0, 00:13:56.107 "reconnect_delay_sec": 0, 00:13:56.107 "fast_io_fail_timeout_sec": 0, 00:13:56.107 "disable_auto_failback": false, 00:13:56.107 "generate_uuids": false, 00:13:56.107 "transport_tos": 0, 00:13:56.107 "nvme_error_stat": false, 00:13:56.107 "rdma_srq_size": 0, 00:13:56.107 "io_path_stat": false, 00:13:56.107 "allow_accel_sequence": false, 00:13:56.107 "rdma_max_cq_size": 0, 00:13:56.107 "rdma_cm_event_timeout_ms": 0, 00:13:56.107 "dhchap_digests": [ 00:13:56.107 "sha256", 00:13:56.107 "sha384", 00:13:56.107 "sha512" 00:13:56.107 ], 00:13:56.107 "dhchap_dhgroups": [ 00:13:56.107 "null", 00:13:56.107 "ffdhe2048", 00:13:56.107 "ffdhe3072", 00:13:56.107 "ffdhe4096", 00:13:56.107 "ffdhe6144", 00:13:56.107 "ffdhe8192" 00:13:56.107 ] 00:13:56.107 } 00:13:56.107 }, 00:13:56.107 { 00:13:56.107 "method": "bdev_nvme_set_hotplug", 00:13:56.107 "params": { 00:13:56.107 
"period_us": 100000, 00:13:56.107 "enable": false 00:13:56.107 } 00:13:56.107 }, 00:13:56.107 { 00:13:56.107 "method": "bdev_malloc_create", 00:13:56.107 "params": { 00:13:56.107 "name": "malloc0", 00:13:56.107 "num_blocks": 8192, 00:13:56.107 "block_size": 4096, 00:13:56.107 "physical_block_size": 4096, 00:13:56.107 "uuid": "457fa6b5-ba2f-4d6e-b649-6d209a2b3cd6", 00:13:56.107 "optimal_io_boundary": 0, 00:13:56.107 "md_size": 0, 00:13:56.107 "dif_type": 0, 00:13:56.107 "dif_is_head_of_md": false, 00:13:56.107 "dif_pi_format": 0 00:13:56.107 } 00:13:56.107 }, 00:13:56.107 { 00:13:56.107 "method": "bdev_wait_for_examine" 00:13:56.107 } 00:13:56.107 ] 00:13:56.107 }, 00:13:56.107 { 00:13:56.107 "subsystem": "nbd", 00:13:56.107 "config": [] 00:13:56.107 }, 00:13:56.107 { 00:13:56.107 "subsystem": "scheduler", 00:13:56.107 "config": [ 00:13:56.107 { 00:13:56.107 "method": "framework_set_scheduler", 00:13:56.107 "params": { 00:13:56.107 "name": "static" 00:13:56.107 } 00:13:56.107 } 00:13:56.107 ] 00:13:56.107 }, 00:13:56.107 { 00:13:56.107 "subsystem": "nvmf", 00:13:56.107 "config": [ 00:13:56.107 { 00:13:56.107 "method": "nvmf_set_config", 00:13:56.107 "params": { 00:13:56.107 "discovery_filter": "match_any", 00:13:56.107 "admin_cmd_passthru": { 00:13:56.107 "identify_ctrlr": false 00:13:56.107 }, 00:13:56.107 "dhchap_digests": [ 00:13:56.107 "sha256", 00:13:56.107 "sha384", 00:13:56.107 "sha512" 00:13:56.107 ], 00:13:56.107 "dhchap_dhgroups": [ 00:13:56.107 "null", 00:13:56.107 "ffdhe2048", 00:13:56.107 "ffdhe3072", 00:13:56.107 "ffdhe4096", 00:13:56.107 "ffdhe6144", 00:13:56.107 "ffdhe8192" 00:13:56.107 ] 00:13:56.107 } 00:13:56.107 }, 00:13:56.107 { 00:13:56.107 "method": "nvmf_set_max_subsystems", 00:13:56.107 "params": { 00:13:56.107 "max_subsystems": 1024 00:13:56.107 } 00:13:56.107 }, 00:13:56.107 { 00:13:56.107 "method": "nvmf_set_crdt", 00:13:56.107 "params": { 00:13:56.107 "crdt1": 0, 00:13:56.107 "crdt2": 0, 00:13:56.107 "crdt3": 0 00:13:56.107 } 00:13:56.107 }, 00:13:56.107 { 00:13:56.107 "method": "nvmf_create_transport", 00:13:56.107 "params": { 00:13:56.107 "trtype": "TCP", 00:13:56.107 "max_queue_depth": 128, 00:13:56.107 "max_io_qpairs_per_ctrlr": 127, 00:13:56.107 "in_capsule_data_size": 4096, 00:13:56.107 "max_io_size": 131072, 00:13:56.107 "io_unit_size": 131072, 00:13:56.107 "max_aq_depth": 128, 00:13:56.107 "num_shared_buffers": 511, 00:13:56.107 "buf_cache_size": 4294967295, 00:13:56.107 "dif_insert_or_strip": false, 00:13:56.107 "zcopy": false, 00:13:56.107 "c2h_success": false, 00:13:56.107 "sock_priority": 0, 00:13:56.107 "abort_timeout_sec": 1, 00:13:56.107 "ack_timeout": 0, 00:13:56.107 "data_wr_pool_size": 0 00:13:56.107 } 00:13:56.107 }, 00:13:56.107 { 00:13:56.107 "method": "nvmf_create_subsystem", 00:13:56.107 "params": { 00:13:56.107 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:56.107 "allow_any_host": false, 00:13:56.107 "serial_number": "SPDK00000000000001", 00:13:56.108 "model_number": "SPDK bdev Controller", 00:13:56.108 "max_namespaces": 10, 00:13:56.108 "min_cntlid": 1, 00:13:56.108 "max_cntlid": 65519, 00:13:56.108 "ana_reporting": false 00:13:56.108 } 00:13:56.108 }, 00:13:56.108 { 00:13:56.108 "method": "nvmf_subsystem_add_host", 00:13:56.108 "params": { 00:13:56.108 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:56.108 "host": "nqn.2016-06.io.spdk:host1", 00:13:56.108 "psk": "key0" 00:13:56.108 } 00:13:56.108 }, 00:13:56.108 { 00:13:56.108 "method": "nvmf_subsystem_add_ns", 00:13:56.108 "params": { 00:13:56.108 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:13:56.108 "namespace": { 00:13:56.108 "nsid": 1, 00:13:56.108 "bdev_name": "malloc0", 00:13:56.108 "nguid": "457FA6B5BA2F4D6EB6496D209A2B3CD6", 00:13:56.108 "uuid": "457fa6b5-ba2f-4d6e-b649-6d209a2b3cd6", 00:13:56.108 "no_auto_visible": false 00:13:56.108 } 00:13:56.108 } 00:13:56.108 }, 00:13:56.108 { 00:13:56.108 "method": "nvmf_subsystem_add_listener", 00:13:56.108 "params": { 00:13:56.108 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:56.108 "listen_address": { 00:13:56.108 "trtype": "TCP", 00:13:56.108 "adrfam": "IPv4", 00:13:56.108 "traddr": "10.0.0.3", 00:13:56.108 "trsvcid": "4420" 00:13:56.108 }, 00:13:56.108 "secure_channel": true 00:13:56.108 } 00:13:56.108 } 00:13:56.108 ] 00:13:56.108 } 00:13:56.108 ] 00:13:56.108 }' 00:13:56.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:56.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72152 00:13:56.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:56.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72152 00:13:56.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72152 ']' 00:13:56.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:56.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:56.108 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:56.108 [2024-11-15 10:31:56.862178] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:13:56.108 [2024-11-15 10:31:56.862272] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.367 [2024-11-15 10:31:57.004723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.367 [2024-11-15 10:31:57.063513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.367 [2024-11-15 10:31:57.063591] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.367 [2024-11-15 10:31:57.063603] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.367 [2024-11-15 10:31:57.063611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.367 [2024-11-15 10:31:57.063618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:56.367 [2024-11-15 10:31:57.064085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.626 [2024-11-15 10:31:57.230607] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:56.626 [2024-11-15 10:31:57.313284] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.626 [2024-11-15 10:31:57.345235] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:56.626 [2024-11-15 10:31:57.345489] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:57.194 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:57.194 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:57.194 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:57.194 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:57.194 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:57.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:57.194 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.194 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72186 00:13:57.194 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72186 /var/tmp/bdevperf.sock 00:13:57.194 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72186 ']' 00:13:57.194 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:57.194 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:57.194 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:13:57.194 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:57.194 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:57.194 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:13:57.194 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:13:57.194 "subsystems": [ 00:13:57.194 { 00:13:57.194 "subsystem": "keyring", 00:13:57.194 "config": [ 00:13:57.194 { 00:13:57.194 "method": "keyring_file_add_key", 00:13:57.194 "params": { 00:13:57.194 "name": "key0", 00:13:57.194 "path": "/tmp/tmp.FC9YzIT312" 00:13:57.194 } 00:13:57.194 } 00:13:57.194 ] 00:13:57.194 }, 00:13:57.194 { 00:13:57.194 "subsystem": "iobuf", 00:13:57.194 "config": [ 00:13:57.194 { 00:13:57.194 "method": "iobuf_set_options", 00:13:57.194 "params": { 00:13:57.194 "small_pool_count": 8192, 00:13:57.194 "large_pool_count": 1024, 00:13:57.194 "small_bufsize": 8192, 00:13:57.194 "large_bufsize": 135168, 00:13:57.194 "enable_numa": false 00:13:57.194 } 00:13:57.194 } 00:13:57.194 ] 00:13:57.194 }, 00:13:57.194 { 00:13:57.194 "subsystem": "sock", 00:13:57.194 "config": [ 00:13:57.194 { 00:13:57.194 "method": "sock_set_default_impl", 00:13:57.194 "params": { 00:13:57.194 "impl_name": "uring" 00:13:57.194 } 00:13:57.194 }, 00:13:57.194 { 00:13:57.194 "method": "sock_impl_set_options", 00:13:57.194 "params": { 00:13:57.194 "impl_name": "ssl", 00:13:57.194 "recv_buf_size": 4096, 00:13:57.194 "send_buf_size": 4096, 00:13:57.194 "enable_recv_pipe": true, 00:13:57.194 "enable_quickack": false, 00:13:57.194 "enable_placement_id": 0, 00:13:57.194 "enable_zerocopy_send_server": true, 00:13:57.194 "enable_zerocopy_send_client": false, 00:13:57.194 "zerocopy_threshold": 0, 00:13:57.194 "tls_version": 0, 00:13:57.194 "enable_ktls": false 00:13:57.194 } 00:13:57.194 }, 00:13:57.194 { 00:13:57.194 "method": "sock_impl_set_options", 00:13:57.194 "params": { 00:13:57.195 "impl_name": "posix", 00:13:57.195 "recv_buf_size": 2097152, 00:13:57.195 "send_buf_size": 2097152, 00:13:57.195 "enable_recv_pipe": true, 00:13:57.195 "enable_quickack": false, 00:13:57.195 "enable_placement_id": 0, 00:13:57.195 "enable_zerocopy_send_server": true, 00:13:57.195 "enable_zerocopy_send_client": false, 00:13:57.195 "zerocopy_threshold": 0, 00:13:57.195 "tls_version": 0, 00:13:57.195 "enable_ktls": false 00:13:57.195 } 00:13:57.195 }, 00:13:57.195 { 00:13:57.195 "method": "sock_impl_set_options", 00:13:57.195 "params": { 00:13:57.195 "impl_name": "uring", 00:13:57.195 "recv_buf_size": 2097152, 00:13:57.195 "send_buf_size": 2097152, 00:13:57.195 "enable_recv_pipe": true, 00:13:57.195 "enable_quickack": false, 00:13:57.195 "enable_placement_id": 0, 00:13:57.195 "enable_zerocopy_send_server": false, 00:13:57.195 "enable_zerocopy_send_client": false, 00:13:57.195 "zerocopy_threshold": 0, 00:13:57.195 "tls_version": 0, 00:13:57.195 "enable_ktls": false 00:13:57.195 } 00:13:57.195 } 00:13:57.195 ] 00:13:57.195 }, 00:13:57.195 { 00:13:57.195 "subsystem": "vmd", 00:13:57.195 "config": [] 00:13:57.195 }, 00:13:57.195 { 00:13:57.195 "subsystem": "accel", 00:13:57.195 "config": [ 00:13:57.195 { 00:13:57.195 "method": "accel_set_options", 00:13:57.195 "params": { 00:13:57.195 "small_cache_size": 128, 00:13:57.195 "large_cache_size": 16, 00:13:57.195 "task_count": 2048, 00:13:57.195 "sequence_count": 
2048, 00:13:57.195 "buf_count": 2048 00:13:57.195 } 00:13:57.195 } 00:13:57.195 ] 00:13:57.195 }, 00:13:57.195 { 00:13:57.195 "subsystem": "bdev", 00:13:57.195 "config": [ 00:13:57.195 { 00:13:57.195 "method": "bdev_set_options", 00:13:57.195 "params": { 00:13:57.195 "bdev_io_pool_size": 65535, 00:13:57.195 "bdev_io_cache_size": 256, 00:13:57.195 "bdev_auto_examine": true, 00:13:57.195 "iobuf_small_cache_size": 128, 00:13:57.195 "iobuf_large_cache_size": 16 00:13:57.195 } 00:13:57.195 }, 00:13:57.195 { 00:13:57.195 "method": "bdev_raid_set_options", 00:13:57.195 "params": { 00:13:57.195 "process_window_size_kb": 1024, 00:13:57.195 "process_max_bandwidth_mb_sec": 0 00:13:57.195 } 00:13:57.195 }, 00:13:57.195 { 00:13:57.195 "method": "bdev_iscsi_set_options", 00:13:57.195 "params": { 00:13:57.195 "timeout_sec": 30 00:13:57.195 } 00:13:57.195 }, 00:13:57.195 { 00:13:57.195 "method": "bdev_nvme_set_options", 00:13:57.195 "params": { 00:13:57.195 "action_on_timeout": "none", 00:13:57.195 "timeout_us": 0, 00:13:57.195 "timeout_admin_us": 0, 00:13:57.195 "keep_alive_timeout_ms": 10000, 00:13:57.195 "arbitration_burst": 0, 00:13:57.195 "low_priority_weight": 0, 00:13:57.195 "medium_priority_weight": 0, 00:13:57.195 "high_priority_weight": 0, 00:13:57.195 "nvme_adminq_poll_period_us": 10000, 00:13:57.195 "nvme_ioq_poll_period_us": 0, 00:13:57.195 "io_queue_requests": 512, 00:13:57.195 "delay_cmd_submit": true, 00:13:57.195 "transport_retry_count": 4, 00:13:57.195 "bdev_retry_count": 3, 00:13:57.195 "transport_ack_timeout": 0, 00:13:57.195 "ctrlr_loss_timeout_sec": 0, 00:13:57.195 "reconnect_delay_sec": 0, 00:13:57.195 "fast_io_fail_timeout_sec": 0, 00:13:57.195 "disable_auto_failback": false, 00:13:57.195 "generate_uuids": false, 00:13:57.195 "transport_tos": 0, 00:13:57.195 "nvme_error_stat": false, 00:13:57.195 "rdma_srq_size": 0, 00:13:57.195 "io_path_stat": false, 00:13:57.195 "allow_accel_sequence": false, 00:13:57.195 "rdma_max_cq_size": 0, 00:13:57.195 "rdma_cm_event_timeout_ms": 0, 00:13:57.195 "dhchap_digests": [ 00:13:57.195 "sha256", 00:13:57.195 "sha384", 00:13:57.195 "sha512" 00:13:57.195 ], 00:13:57.195 "dhchap_dhgroups": [ 00:13:57.195 "null", 00:13:57.195 "ffdhe2048", 00:13:57.195 "ffdhe3072", 00:13:57.195 "ffdhe4096", 00:13:57.195 "ffdhe6144", 00:13:57.195 "ffdhe8192" 00:13:57.195 ] 00:13:57.195 } 00:13:57.195 }, 00:13:57.195 { 00:13:57.195 "method": "bdev_nvme_attach_controller", 00:13:57.195 "params": { 00:13:57.195 "name": "TLSTEST", 00:13:57.195 "trtype": "TCP", 00:13:57.195 "adrfam": "IPv4", 00:13:57.195 "traddr": "10.0.0.3", 00:13:57.195 "trsvcid": "4420", 00:13:57.195 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:57.195 "prchk_reftag": false, 00:13:57.195 "prchk_guard": false, 00:13:57.195 "ctrlr_loss_timeout_sec": 0, 00:13:57.195 "reconnect_delay_sec": 0, 00:13:57.195 "fast_io_fail_timeout_sec": 0, 00:13:57.195 "psk": "key0", 00:13:57.195 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:57.195 "hdgst": false, 00:13:57.195 "ddgst": false, 00:13:57.195 "multipath": "multipath" 00:13:57.195 } 00:13:57.195 }, 00:13:57.195 { 00:13:57.195 "method": "bdev_nvme_set_hotplug", 00:13:57.195 "params": { 00:13:57.195 "period_us": 100000, 00:13:57.195 "enable": false 00:13:57.195 } 00:13:57.195 }, 00:13:57.195 { 00:13:57.195 "method": "bdev_wait_for_examine" 00:13:57.195 } 00:13:57.195 ] 00:13:57.195 }, 00:13:57.195 { 00:13:57.195 "subsystem": "nbd", 00:13:57.195 "config": [] 00:13:57.195 } 00:13:57.195 ] 00:13:57.195 }' 00:13:57.195 [2024-11-15 10:31:57.934744] Starting SPDK v25.01-pre git 
sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:13:57.195 [2024-11-15 10:31:57.934863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72186 ] 00:13:57.454 [2024-11-15 10:31:58.085777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.454 [2024-11-15 10:31:58.155047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:57.454 [2024-11-15 10:31:58.291505] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:57.714 [2024-11-15 10:31:58.344390] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:58.281 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:58.281 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:58.281 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:58.539 Running I/O for 10 seconds... 00:14:00.413 3743.00 IOPS, 14.62 MiB/s [2024-11-15T10:32:02.201Z] 3935.00 IOPS, 15.37 MiB/s [2024-11-15T10:32:03.621Z] 3997.00 IOPS, 15.61 MiB/s [2024-11-15T10:32:04.557Z] 4031.50 IOPS, 15.75 MiB/s [2024-11-15T10:32:05.493Z] 4048.60 IOPS, 15.81 MiB/s [2024-11-15T10:32:06.427Z] 4027.83 IOPS, 15.73 MiB/s [2024-11-15T10:32:07.363Z] 4043.00 IOPS, 15.79 MiB/s [2024-11-15T10:32:08.298Z] 4056.38 IOPS, 15.85 MiB/s [2024-11-15T10:32:09.234Z] 4064.33 IOPS, 15.88 MiB/s [2024-11-15T10:32:09.234Z] 4069.20 IOPS, 15.90 MiB/s 00:14:08.381 Latency(us) 00:14:08.381 [2024-11-15T10:32:09.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.381 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:08.381 Verification LBA range: start 0x0 length 0x2000 00:14:08.381 TLSTESTn1 : 10.02 4075.22 15.92 0.00 0.00 31352.04 5123.72 34793.66 00:14:08.381 [2024-11-15T10:32:09.234Z] =================================================================================================================== 00:14:08.381 [2024-11-15T10:32:09.234Z] Total : 4075.22 15.92 0.00 0.00 31352.04 5123.72 34793.66 00:14:08.381 { 00:14:08.381 "results": [ 00:14:08.381 { 00:14:08.381 "job": "TLSTESTn1", 00:14:08.381 "core_mask": "0x4", 00:14:08.381 "workload": "verify", 00:14:08.381 "status": "finished", 00:14:08.381 "verify_range": { 00:14:08.381 "start": 0, 00:14:08.381 "length": 8192 00:14:08.381 }, 00:14:08.381 "queue_depth": 128, 00:14:08.381 "io_size": 4096, 00:14:08.381 "runtime": 10.01589, 00:14:08.381 "iops": 4075.2244683198396, 00:14:08.381 "mibps": 15.918845579374374, 00:14:08.381 "io_failed": 0, 00:14:08.381 "io_timeout": 0, 00:14:08.381 "avg_latency_us": 31352.039127368942, 00:14:08.381 "min_latency_us": 5123.723636363637, 00:14:08.381 "max_latency_us": 34793.65818181818 00:14:08.381 } 00:14:08.381 ], 00:14:08.381 "core_count": 1 00:14:08.381 } 00:14:08.381 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:08.381 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72186 00:14:08.381 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72186 ']' 00:14:08.381 
10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72186 00:14:08.640 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:08.640 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:08.640 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72186 00:14:08.640 killing process with pid 72186 00:14:08.640 Received shutdown signal, test time was about 10.000000 seconds 00:14:08.640 00:14:08.640 Latency(us) 00:14:08.640 [2024-11-15T10:32:09.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.640 [2024-11-15T10:32:09.493Z] =================================================================================================================== 00:14:08.640 [2024-11-15T10:32:09.493Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:08.640 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:08.640 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:08.640 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72186' 00:14:08.640 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72186 00:14:08.640 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72186 00:14:08.640 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72152 00:14:08.640 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72152 ']' 00:14:08.640 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72152 00:14:08.640 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:08.640 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:08.640 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72152 00:14:08.899 killing process with pid 72152 00:14:08.899 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:08.899 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:08.899 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72152' 00:14:08.899 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72152 00:14:08.899 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72152 00:14:08.899 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:14:08.899 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:08.899 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:08.899 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:08.899 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72319 00:14:08.899 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72319 00:14:08.899 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:08.899 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72319 ']' 00:14:08.899 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.899 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:08.899 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.899 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:08.899 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.157 [2024-11-15 10:32:09.765207] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:14:09.157 [2024-11-15 10:32:09.765304] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.157 [2024-11-15 10:32:09.910971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.157 [2024-11-15 10:32:09.970972] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.157 [2024-11-15 10:32:09.971039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.157 [2024-11-15 10:32:09.971069] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.157 [2024-11-15 10:32:09.971080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.157 [2024-11-15 10:32:09.971088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:09.157 [2024-11-15 10:32:09.971483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.415 [2024-11-15 10:32:10.024311] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:09.415 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:09.415 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:09.415 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:09.415 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:09.415 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.415 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.415 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.FC9YzIT312 00:14:09.415 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.FC9YzIT312 00:14:09.415 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:09.673 [2024-11-15 10:32:10.391640] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.673 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:09.931 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:10.254 [2024-11-15 10:32:10.927756] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:10.254 [2024-11-15 10:32:10.928027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:10.254 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:10.512 malloc0 00:14:10.512 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:10.771 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.FC9YzIT312 00:14:11.029 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:11.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
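The target-side setup performed by the RPC calls logged just above can be summarized as the following minimal sketch (paths shortened to be relative to the SPDK repository; the address, port, subsystem NQN and the temporary PSK file /tmp/tmp.FC9YzIT312 are simply the values used in this particular run):

    # create the TCP transport and a subsystem backed by a 32 MiB malloc bdev
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-capable (logged above as experimental)
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # register the PSK file in the keyring and allow the host to use it
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.FC9YzIT312
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0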
00:14:11.288 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72373 00:14:11.288 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:11.288 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:11.288 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72373 /var/tmp/bdevperf.sock 00:14:11.288 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72373 ']' 00:14:11.288 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:11.288 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:11.288 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:11.288 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:11.288 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:11.288 [2024-11-15 10:32:12.131568] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:14:11.288 [2024-11-15 10:32:12.132037] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72373 ] 00:14:11.547 [2024-11-15 10:32:12.287081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.547 [2024-11-15 10:32:12.348688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.806 [2024-11-15 10:32:12.402971] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:12.373 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:12.373 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:12.373 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FC9YzIT312 00:14:12.632 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:12.891 [2024-11-15 10:32:13.648295] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:12.891 nvme0n1 00:14:12.891 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:13.149 Running I/O for 1 seconds... 
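On the initiator side, the bdevperf run started above boils down to roughly the following sketch (repository-relative paths; queue depth, I/O size and the one-second runtime match the command line logged above):

    # start bdevperf idle (-z) with its own RPC socket
    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    # hand it the same PSK and attach the TLS-enabled controller
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FC9YzIT312
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # kick off the verify workload defined on the command line
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests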
00:14:14.084 3730.00 IOPS, 14.57 MiB/s 00:14:14.084 Latency(us) 00:14:14.084 [2024-11-15T10:32:14.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.084 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:14.084 Verification LBA range: start 0x0 length 0x2000 00:14:14.084 nvme0n1 : 1.02 3789.37 14.80 0.00 0.00 33441.01 7566.43 24903.68 00:14:14.084 [2024-11-15T10:32:14.937Z] =================================================================================================================== 00:14:14.084 [2024-11-15T10:32:14.937Z] Total : 3789.37 14.80 0.00 0.00 33441.01 7566.43 24903.68 00:14:14.084 { 00:14:14.084 "results": [ 00:14:14.084 { 00:14:14.084 "job": "nvme0n1", 00:14:14.084 "core_mask": "0x2", 00:14:14.084 "workload": "verify", 00:14:14.084 "status": "finished", 00:14:14.084 "verify_range": { 00:14:14.084 "start": 0, 00:14:14.084 "length": 8192 00:14:14.084 }, 00:14:14.084 "queue_depth": 128, 00:14:14.084 "io_size": 4096, 00:14:14.084 "runtime": 1.01811, 00:14:14.084 "iops": 3789.3744290891946, 00:14:14.084 "mibps": 14.802243863629666, 00:14:14.084 "io_failed": 0, 00:14:14.084 "io_timeout": 0, 00:14:14.084 "avg_latency_us": 33441.00721805929, 00:14:14.084 "min_latency_us": 7566.4290909090905, 00:14:14.084 "max_latency_us": 24903.68 00:14:14.084 } 00:14:14.084 ], 00:14:14.084 "core_count": 1 00:14:14.084 } 00:14:14.084 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72373 00:14:14.084 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72373 ']' 00:14:14.084 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72373 00:14:14.084 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:14.084 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:14.084 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72373 00:14:14.084 killing process with pid 72373 00:14:14.084 Received shutdown signal, test time was about 1.000000 seconds 00:14:14.084 00:14:14.084 Latency(us) 00:14:14.084 [2024-11-15T10:32:14.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.084 [2024-11-15T10:32:14.937Z] =================================================================================================================== 00:14:14.084 [2024-11-15T10:32:14.937Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:14.084 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:14.084 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:14.084 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72373' 00:14:14.084 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72373 00:14:14.084 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72373 00:14:14.341 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72319 00:14:14.341 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72319 ']' 00:14:14.341 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72319 00:14:14.341 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:14:14.341 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:14.341 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72319 00:14:14.341 killing process with pid 72319 00:14:14.341 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:14.341 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:14.341 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72319' 00:14:14.341 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72319 00:14:14.341 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72319 00:14:14.604 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:14:14.604 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:14.604 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:14.604 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.604 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72424 00:14:14.604 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:14.604 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72424 00:14:14.604 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72424 ']' 00:14:14.604 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.604 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:14.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.604 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.604 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:14.604 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.862 [2024-11-15 10:32:15.508841] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:14:14.862 [2024-11-15 10:32:15.509217] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.862 [2024-11-15 10:32:15.654468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.120 [2024-11-15 10:32:15.732599] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.120 [2024-11-15 10:32:15.732698] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:15.120 [2024-11-15 10:32:15.732724] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.120 [2024-11-15 10:32:15.732743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.120 [2024-11-15 10:32:15.732759] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:15.120 [2024-11-15 10:32:15.733465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.120 [2024-11-15 10:32:15.791850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:15.686 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:15.686 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:15.686 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:15.686 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:15.686 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:15.686 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.686 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:14:15.686 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.686 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:15.686 [2024-11-15 10:32:16.525757] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.944 malloc0 00:14:15.944 [2024-11-15 10:32:16.558605] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:15.944 [2024-11-15 10:32:16.558931] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:15.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:15.944 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.944 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72456 00:14:15.944 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72456 /var/tmp/bdevperf.sock 00:14:15.944 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:15.944 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72456 ']' 00:14:15.944 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:15.944 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:15.944 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:15.944 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:15.944 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:15.944 [2024-11-15 10:32:16.646933] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:14:15.944 [2024-11-15 10:32:16.647352] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72456 ] 00:14:15.944 [2024-11-15 10:32:16.793161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.203 [2024-11-15 10:32:16.857646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.203 [2024-11-15 10:32:16.912925] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:16.203 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:16.203 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:16.203 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FC9YzIT312 00:14:16.460 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:16.719 [2024-11-15 10:32:17.553859] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:16.978 nvme0n1 00:14:16.978 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:16.978 Running I/O for 1 seconds... 
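The result object printed after each run (see the blocks above and below) carries the per-job numbers under results[]. As a hypothetical post-processing step, not part of the test script, the headline figures could be pulled out like this, assuming the JSON emitted by perform_tests is captured on stdout:

    result=$(examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests)
    echo "$result" | jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"'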
00:14:18.172 3840.00 IOPS, 15.00 MiB/s 00:14:18.172 Latency(us) 00:14:18.172 [2024-11-15T10:32:19.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.172 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:18.172 Verification LBA range: start 0x0 length 0x2000 00:14:18.172 nvme0n1 : 1.03 3867.82 15.11 0.00 0.00 32722.18 7089.80 21328.99 00:14:18.172 [2024-11-15T10:32:19.025Z] =================================================================================================================== 00:14:18.172 [2024-11-15T10:32:19.025Z] Total : 3867.82 15.11 0.00 0.00 32722.18 7089.80 21328.99 00:14:18.172 { 00:14:18.172 "results": [ 00:14:18.172 { 00:14:18.172 "job": "nvme0n1", 00:14:18.172 "core_mask": "0x2", 00:14:18.172 "workload": "verify", 00:14:18.172 "status": "finished", 00:14:18.172 "verify_range": { 00:14:18.172 "start": 0, 00:14:18.172 "length": 8192 00:14:18.172 }, 00:14:18.172 "queue_depth": 128, 00:14:18.172 "io_size": 4096, 00:14:18.172 "runtime": 1.0259, 00:14:18.172 "iops": 3867.823374597914, 00:14:18.172 "mibps": 15.108685057023102, 00:14:18.172 "io_failed": 0, 00:14:18.172 "io_timeout": 0, 00:14:18.172 "avg_latency_us": 32722.179002932557, 00:14:18.172 "min_latency_us": 7089.8036363636365, 00:14:18.172 "max_latency_us": 21328.98909090909 00:14:18.172 } 00:14:18.172 ], 00:14:18.172 "core_count": 1 00:14:18.172 } 00:14:18.172 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:14:18.172 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.172 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.172 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.172 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:14:18.172 "subsystems": [ 00:14:18.172 { 00:14:18.172 "subsystem": "keyring", 00:14:18.172 "config": [ 00:14:18.172 { 00:14:18.172 "method": "keyring_file_add_key", 00:14:18.172 "params": { 00:14:18.172 "name": "key0", 00:14:18.173 "path": "/tmp/tmp.FC9YzIT312" 00:14:18.173 } 00:14:18.173 } 00:14:18.173 ] 00:14:18.173 }, 00:14:18.173 { 00:14:18.173 "subsystem": "iobuf", 00:14:18.173 "config": [ 00:14:18.173 { 00:14:18.173 "method": "iobuf_set_options", 00:14:18.173 "params": { 00:14:18.173 "small_pool_count": 8192, 00:14:18.173 "large_pool_count": 1024, 00:14:18.173 "small_bufsize": 8192, 00:14:18.173 "large_bufsize": 135168, 00:14:18.173 "enable_numa": false 00:14:18.173 } 00:14:18.173 } 00:14:18.173 ] 00:14:18.173 }, 00:14:18.173 { 00:14:18.173 "subsystem": "sock", 00:14:18.173 "config": [ 00:14:18.173 { 00:14:18.173 "method": "sock_set_default_impl", 00:14:18.173 "params": { 00:14:18.173 "impl_name": "uring" 00:14:18.173 } 00:14:18.173 }, 00:14:18.173 { 00:14:18.173 "method": "sock_impl_set_options", 00:14:18.173 "params": { 00:14:18.173 "impl_name": "ssl", 00:14:18.173 "recv_buf_size": 4096, 00:14:18.173 "send_buf_size": 4096, 00:14:18.173 "enable_recv_pipe": true, 00:14:18.173 "enable_quickack": false, 00:14:18.173 "enable_placement_id": 0, 00:14:18.173 "enable_zerocopy_send_server": true, 00:14:18.173 "enable_zerocopy_send_client": false, 00:14:18.173 "zerocopy_threshold": 0, 00:14:18.173 "tls_version": 0, 00:14:18.173 "enable_ktls": false 00:14:18.173 } 00:14:18.173 }, 00:14:18.173 { 00:14:18.173 "method": "sock_impl_set_options", 00:14:18.173 "params": { 00:14:18.173 "impl_name": "posix", 
00:14:18.173 "recv_buf_size": 2097152, 00:14:18.173 "send_buf_size": 2097152, 00:14:18.173 "enable_recv_pipe": true, 00:14:18.173 "enable_quickack": false, 00:14:18.173 "enable_placement_id": 0, 00:14:18.173 "enable_zerocopy_send_server": true, 00:14:18.173 "enable_zerocopy_send_client": false, 00:14:18.173 "zerocopy_threshold": 0, 00:14:18.173 "tls_version": 0, 00:14:18.173 "enable_ktls": false 00:14:18.173 } 00:14:18.173 }, 00:14:18.173 { 00:14:18.173 "method": "sock_impl_set_options", 00:14:18.173 "params": { 00:14:18.173 "impl_name": "uring", 00:14:18.173 "recv_buf_size": 2097152, 00:14:18.173 "send_buf_size": 2097152, 00:14:18.173 "enable_recv_pipe": true, 00:14:18.173 "enable_quickack": false, 00:14:18.173 "enable_placement_id": 0, 00:14:18.173 "enable_zerocopy_send_server": false, 00:14:18.173 "enable_zerocopy_send_client": false, 00:14:18.173 "zerocopy_threshold": 0, 00:14:18.173 "tls_version": 0, 00:14:18.173 "enable_ktls": false 00:14:18.173 } 00:14:18.173 } 00:14:18.173 ] 00:14:18.173 }, 00:14:18.173 { 00:14:18.173 "subsystem": "vmd", 00:14:18.173 "config": [] 00:14:18.173 }, 00:14:18.173 { 00:14:18.173 "subsystem": "accel", 00:14:18.173 "config": [ 00:14:18.173 { 00:14:18.173 "method": "accel_set_options", 00:14:18.173 "params": { 00:14:18.173 "small_cache_size": 128, 00:14:18.173 "large_cache_size": 16, 00:14:18.173 "task_count": 2048, 00:14:18.173 "sequence_count": 2048, 00:14:18.173 "buf_count": 2048 00:14:18.173 } 00:14:18.173 } 00:14:18.173 ] 00:14:18.173 }, 00:14:18.173 { 00:14:18.173 "subsystem": "bdev", 00:14:18.173 "config": [ 00:14:18.173 { 00:14:18.173 "method": "bdev_set_options", 00:14:18.173 "params": { 00:14:18.173 "bdev_io_pool_size": 65535, 00:14:18.173 "bdev_io_cache_size": 256, 00:14:18.173 "bdev_auto_examine": true, 00:14:18.173 "iobuf_small_cache_size": 128, 00:14:18.173 "iobuf_large_cache_size": 16 00:14:18.173 } 00:14:18.173 }, 00:14:18.173 { 00:14:18.173 "method": "bdev_raid_set_options", 00:14:18.173 "params": { 00:14:18.173 "process_window_size_kb": 1024, 00:14:18.173 "process_max_bandwidth_mb_sec": 0 00:14:18.173 } 00:14:18.173 }, 00:14:18.173 { 00:14:18.173 "method": "bdev_iscsi_set_options", 00:14:18.173 "params": { 00:14:18.173 "timeout_sec": 30 00:14:18.173 } 00:14:18.173 }, 00:14:18.173 { 00:14:18.173 "method": "bdev_nvme_set_options", 00:14:18.173 "params": { 00:14:18.173 "action_on_timeout": "none", 00:14:18.173 "timeout_us": 0, 00:14:18.173 "timeout_admin_us": 0, 00:14:18.173 "keep_alive_timeout_ms": 10000, 00:14:18.173 "arbitration_burst": 0, 00:14:18.173 "low_priority_weight": 0, 00:14:18.173 "medium_priority_weight": 0, 00:14:18.173 "high_priority_weight": 0, 00:14:18.173 "nvme_adminq_poll_period_us": 10000, 00:14:18.173 "nvme_ioq_poll_period_us": 0, 00:14:18.173 "io_queue_requests": 0, 00:14:18.173 "delay_cmd_submit": true, 00:14:18.173 "transport_retry_count": 4, 00:14:18.173 "bdev_retry_count": 3, 00:14:18.173 "transport_ack_timeout": 0, 00:14:18.173 "ctrlr_loss_timeout_sec": 0, 00:14:18.173 "reconnect_delay_sec": 0, 00:14:18.173 "fast_io_fail_timeout_sec": 0, 00:14:18.173 "disable_auto_failback": false, 00:14:18.173 "generate_uuids": false, 00:14:18.173 "transport_tos": 0, 00:14:18.173 "nvme_error_stat": false, 00:14:18.173 "rdma_srq_size": 0, 00:14:18.173 "io_path_stat": false, 00:14:18.173 "allow_accel_sequence": false, 00:14:18.173 "rdma_max_cq_size": 0, 00:14:18.173 "rdma_cm_event_timeout_ms": 0, 00:14:18.173 "dhchap_digests": [ 00:14:18.173 "sha256", 00:14:18.173 "sha384", 00:14:18.173 "sha512" 00:14:18.173 ], 00:14:18.173 
"dhchap_dhgroups": [ 00:14:18.173 "null", 00:14:18.173 "ffdhe2048", 00:14:18.173 "ffdhe3072", 00:14:18.173 "ffdhe4096", 00:14:18.173 "ffdhe6144", 00:14:18.173 "ffdhe8192" 00:14:18.173 ] 00:14:18.173 } 00:14:18.173 }, 00:14:18.173 { 00:14:18.173 "method": "bdev_nvme_set_hotplug", 00:14:18.173 "params": { 00:14:18.173 "period_us": 100000, 00:14:18.173 "enable": false 00:14:18.173 } 00:14:18.173 }, 00:14:18.173 { 00:14:18.173 "method": "bdev_malloc_create", 00:14:18.173 "params": { 00:14:18.173 "name": "malloc0", 00:14:18.173 "num_blocks": 8192, 00:14:18.173 "block_size": 4096, 00:14:18.173 "physical_block_size": 4096, 00:14:18.173 "uuid": "dd0b2998-1622-441c-9ccc-c1191c98caa4", 00:14:18.173 "optimal_io_boundary": 0, 00:14:18.173 "md_size": 0, 00:14:18.173 "dif_type": 0, 00:14:18.173 "dif_is_head_of_md": false, 00:14:18.173 "dif_pi_format": 0 00:14:18.173 } 00:14:18.173 }, 00:14:18.173 { 00:14:18.173 "method": "bdev_wait_for_examine" 00:14:18.173 } 00:14:18.173 ] 00:14:18.173 }, 00:14:18.173 { 00:14:18.173 "subsystem": "nbd", 00:14:18.173 "config": [] 00:14:18.173 }, 00:14:18.173 { 00:14:18.173 "subsystem": "scheduler", 00:14:18.173 "config": [ 00:14:18.173 { 00:14:18.173 "method": "framework_set_scheduler", 00:14:18.173 "params": { 00:14:18.173 "name": "static" 00:14:18.173 } 00:14:18.173 } 00:14:18.173 ] 00:14:18.173 }, 00:14:18.173 { 00:14:18.173 "subsystem": "nvmf", 00:14:18.173 "config": [ 00:14:18.173 { 00:14:18.173 "method": "nvmf_set_config", 00:14:18.173 "params": { 00:14:18.173 "discovery_filter": "match_any", 00:14:18.173 "admin_cmd_passthru": { 00:14:18.173 "identify_ctrlr": false 00:14:18.173 }, 00:14:18.173 "dhchap_digests": [ 00:14:18.173 "sha256", 00:14:18.173 "sha384", 00:14:18.173 "sha512" 00:14:18.173 ], 00:14:18.173 "dhchap_dhgroups": [ 00:14:18.173 "null", 00:14:18.173 "ffdhe2048", 00:14:18.173 "ffdhe3072", 00:14:18.173 "ffdhe4096", 00:14:18.173 "ffdhe6144", 00:14:18.173 "ffdhe8192" 00:14:18.173 ] 00:14:18.173 } 00:14:18.173 }, 00:14:18.173 { 00:14:18.173 "method": "nvmf_set_max_subsystems", 00:14:18.173 "params": { 00:14:18.173 "max_subsystems": 1024 00:14:18.173 } 00:14:18.173 }, 00:14:18.173 { 00:14:18.173 "method": "nvmf_set_crdt", 00:14:18.173 "params": { 00:14:18.173 "crdt1": 0, 00:14:18.173 "crdt2": 0, 00:14:18.173 "crdt3": 0 00:14:18.173 } 00:14:18.173 }, 00:14:18.173 { 00:14:18.173 "method": "nvmf_create_transport", 00:14:18.173 "params": { 00:14:18.173 "trtype": "TCP", 00:14:18.173 "max_queue_depth": 128, 00:14:18.173 "max_io_qpairs_per_ctrlr": 127, 00:14:18.173 "in_capsule_data_size": 4096, 00:14:18.173 "max_io_size": 131072, 00:14:18.173 "io_unit_size": 131072, 00:14:18.173 "max_aq_depth": 128, 00:14:18.173 "num_shared_buffers": 511, 00:14:18.173 "buf_cache_size": 4294967295, 00:14:18.173 "dif_insert_or_strip": false, 00:14:18.173 "zcopy": false, 00:14:18.173 "c2h_success": false, 00:14:18.173 "sock_priority": 0, 00:14:18.173 "abort_timeout_sec": 1, 00:14:18.174 "ack_timeout": 0, 00:14:18.174 "data_wr_pool_size": 0 00:14:18.174 } 00:14:18.174 }, 00:14:18.174 { 00:14:18.174 "method": "nvmf_create_subsystem", 00:14:18.174 "params": { 00:14:18.174 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:18.174 "allow_any_host": false, 00:14:18.174 "serial_number": "00000000000000000000", 00:14:18.174 "model_number": "SPDK bdev Controller", 00:14:18.174 "max_namespaces": 32, 00:14:18.174 "min_cntlid": 1, 00:14:18.174 "max_cntlid": 65519, 00:14:18.174 "ana_reporting": false 00:14:18.174 } 00:14:18.174 }, 00:14:18.174 { 00:14:18.174 "method": "nvmf_subsystem_add_host", 
00:14:18.174 "params": { 00:14:18.174 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:18.174 "host": "nqn.2016-06.io.spdk:host1", 00:14:18.174 "psk": "key0" 00:14:18.174 } 00:14:18.174 }, 00:14:18.174 { 00:14:18.174 "method": "nvmf_subsystem_add_ns", 00:14:18.174 "params": { 00:14:18.174 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:18.174 "namespace": { 00:14:18.174 "nsid": 1, 00:14:18.174 "bdev_name": "malloc0", 00:14:18.174 "nguid": "DD0B29981622441C9CCCC1191C98CAA4", 00:14:18.174 "uuid": "dd0b2998-1622-441c-9ccc-c1191c98caa4", 00:14:18.174 "no_auto_visible": false 00:14:18.174 } 00:14:18.174 } 00:14:18.174 }, 00:14:18.174 { 00:14:18.174 "method": "nvmf_subsystem_add_listener", 00:14:18.174 "params": { 00:14:18.174 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:18.174 "listen_address": { 00:14:18.174 "trtype": "TCP", 00:14:18.174 "adrfam": "IPv4", 00:14:18.174 "traddr": "10.0.0.3", 00:14:18.174 "trsvcid": "4420" 00:14:18.174 }, 00:14:18.174 "secure_channel": false, 00:14:18.174 "sock_impl": "ssl" 00:14:18.174 } 00:14:18.174 } 00:14:18.174 ] 00:14:18.174 } 00:14:18.174 ] 00:14:18.174 }' 00:14:18.174 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:18.742 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:14:18.742 "subsystems": [ 00:14:18.742 { 00:14:18.742 "subsystem": "keyring", 00:14:18.742 "config": [ 00:14:18.742 { 00:14:18.742 "method": "keyring_file_add_key", 00:14:18.742 "params": { 00:14:18.742 "name": "key0", 00:14:18.742 "path": "/tmp/tmp.FC9YzIT312" 00:14:18.742 } 00:14:18.742 } 00:14:18.742 ] 00:14:18.742 }, 00:14:18.742 { 00:14:18.742 "subsystem": "iobuf", 00:14:18.742 "config": [ 00:14:18.742 { 00:14:18.742 "method": "iobuf_set_options", 00:14:18.742 "params": { 00:14:18.742 "small_pool_count": 8192, 00:14:18.742 "large_pool_count": 1024, 00:14:18.742 "small_bufsize": 8192, 00:14:18.742 "large_bufsize": 135168, 00:14:18.742 "enable_numa": false 00:14:18.742 } 00:14:18.742 } 00:14:18.742 ] 00:14:18.742 }, 00:14:18.742 { 00:14:18.742 "subsystem": "sock", 00:14:18.742 "config": [ 00:14:18.742 { 00:14:18.742 "method": "sock_set_default_impl", 00:14:18.742 "params": { 00:14:18.742 "impl_name": "uring" 00:14:18.742 } 00:14:18.742 }, 00:14:18.742 { 00:14:18.742 "method": "sock_impl_set_options", 00:14:18.742 "params": { 00:14:18.742 "impl_name": "ssl", 00:14:18.742 "recv_buf_size": 4096, 00:14:18.742 "send_buf_size": 4096, 00:14:18.742 "enable_recv_pipe": true, 00:14:18.742 "enable_quickack": false, 00:14:18.742 "enable_placement_id": 0, 00:14:18.742 "enable_zerocopy_send_server": true, 00:14:18.742 "enable_zerocopy_send_client": false, 00:14:18.743 "zerocopy_threshold": 0, 00:14:18.743 "tls_version": 0, 00:14:18.743 "enable_ktls": false 00:14:18.743 } 00:14:18.743 }, 00:14:18.743 { 00:14:18.743 "method": "sock_impl_set_options", 00:14:18.743 "params": { 00:14:18.743 "impl_name": "posix", 00:14:18.743 "recv_buf_size": 2097152, 00:14:18.743 "send_buf_size": 2097152, 00:14:18.743 "enable_recv_pipe": true, 00:14:18.743 "enable_quickack": false, 00:14:18.743 "enable_placement_id": 0, 00:14:18.743 "enable_zerocopy_send_server": true, 00:14:18.743 "enable_zerocopy_send_client": false, 00:14:18.743 "zerocopy_threshold": 0, 00:14:18.743 "tls_version": 0, 00:14:18.743 "enable_ktls": false 00:14:18.743 } 00:14:18.743 }, 00:14:18.743 { 00:14:18.743 "method": "sock_impl_set_options", 00:14:18.743 "params": { 00:14:18.743 "impl_name": "uring", 00:14:18.743 
"recv_buf_size": 2097152, 00:14:18.743 "send_buf_size": 2097152, 00:14:18.743 "enable_recv_pipe": true, 00:14:18.743 "enable_quickack": false, 00:14:18.743 "enable_placement_id": 0, 00:14:18.743 "enable_zerocopy_send_server": false, 00:14:18.743 "enable_zerocopy_send_client": false, 00:14:18.743 "zerocopy_threshold": 0, 00:14:18.743 "tls_version": 0, 00:14:18.743 "enable_ktls": false 00:14:18.743 } 00:14:18.743 } 00:14:18.743 ] 00:14:18.743 }, 00:14:18.743 { 00:14:18.743 "subsystem": "vmd", 00:14:18.743 "config": [] 00:14:18.743 }, 00:14:18.743 { 00:14:18.743 "subsystem": "accel", 00:14:18.743 "config": [ 00:14:18.743 { 00:14:18.743 "method": "accel_set_options", 00:14:18.743 "params": { 00:14:18.743 "small_cache_size": 128, 00:14:18.743 "large_cache_size": 16, 00:14:18.743 "task_count": 2048, 00:14:18.743 "sequence_count": 2048, 00:14:18.743 "buf_count": 2048 00:14:18.743 } 00:14:18.743 } 00:14:18.743 ] 00:14:18.743 }, 00:14:18.743 { 00:14:18.743 "subsystem": "bdev", 00:14:18.743 "config": [ 00:14:18.743 { 00:14:18.743 "method": "bdev_set_options", 00:14:18.743 "params": { 00:14:18.743 "bdev_io_pool_size": 65535, 00:14:18.743 "bdev_io_cache_size": 256, 00:14:18.743 "bdev_auto_examine": true, 00:14:18.743 "iobuf_small_cache_size": 128, 00:14:18.743 "iobuf_large_cache_size": 16 00:14:18.743 } 00:14:18.743 }, 00:14:18.743 { 00:14:18.743 "method": "bdev_raid_set_options", 00:14:18.743 "params": { 00:14:18.743 "process_window_size_kb": 1024, 00:14:18.743 "process_max_bandwidth_mb_sec": 0 00:14:18.743 } 00:14:18.743 }, 00:14:18.743 { 00:14:18.743 "method": "bdev_iscsi_set_options", 00:14:18.743 "params": { 00:14:18.743 "timeout_sec": 30 00:14:18.743 } 00:14:18.743 }, 00:14:18.743 { 00:14:18.743 "method": "bdev_nvme_set_options", 00:14:18.743 "params": { 00:14:18.743 "action_on_timeout": "none", 00:14:18.743 "timeout_us": 0, 00:14:18.743 "timeout_admin_us": 0, 00:14:18.743 "keep_alive_timeout_ms": 10000, 00:14:18.743 "arbitration_burst": 0, 00:14:18.743 "low_priority_weight": 0, 00:14:18.743 "medium_priority_weight": 0, 00:14:18.743 "high_priority_weight": 0, 00:14:18.743 "nvme_adminq_poll_period_us": 10000, 00:14:18.743 "nvme_ioq_poll_period_us": 0, 00:14:18.743 "io_queue_requests": 512, 00:14:18.743 "delay_cmd_submit": true, 00:14:18.743 "transport_retry_count": 4, 00:14:18.743 "bdev_retry_count": 3, 00:14:18.743 "transport_ack_timeout": 0, 00:14:18.743 "ctrlr_loss_timeout_sec": 0, 00:14:18.743 "reconnect_delay_sec": 0, 00:14:18.743 "fast_io_fail_timeout_sec": 0, 00:14:18.743 "disable_auto_failback": false, 00:14:18.743 "generate_uuids": false, 00:14:18.743 "transport_tos": 0, 00:14:18.743 "nvme_error_stat": false, 00:14:18.743 "rdma_srq_size": 0, 00:14:18.743 "io_path_stat": false, 00:14:18.743 "allow_accel_sequence": false, 00:14:18.743 "rdma_max_cq_size": 0, 00:14:18.743 "rdma_cm_event_timeout_ms": 0, 00:14:18.743 "dhchap_digests": [ 00:14:18.743 "sha256", 00:14:18.743 "sha384", 00:14:18.743 "sha512" 00:14:18.743 ], 00:14:18.743 "dhchap_dhgroups": [ 00:14:18.743 "null", 00:14:18.743 "ffdhe2048", 00:14:18.743 "ffdhe3072", 00:14:18.743 "ffdhe4096", 00:14:18.743 "ffdhe6144", 00:14:18.743 "ffdhe8192" 00:14:18.743 ] 00:14:18.743 } 00:14:18.743 }, 00:14:18.743 { 00:14:18.743 "method": "bdev_nvme_attach_controller", 00:14:18.743 "params": { 00:14:18.743 "name": "nvme0", 00:14:18.743 "trtype": "TCP", 00:14:18.743 "adrfam": "IPv4", 00:14:18.743 "traddr": "10.0.0.3", 00:14:18.743 "trsvcid": "4420", 00:14:18.743 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:18.743 "prchk_reftag": false, 00:14:18.743 
"prchk_guard": false, 00:14:18.743 "ctrlr_loss_timeout_sec": 0, 00:14:18.743 "reconnect_delay_sec": 0, 00:14:18.743 "fast_io_fail_timeout_sec": 0, 00:14:18.743 "psk": "key0", 00:14:18.743 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:18.743 "hdgst": false, 00:14:18.743 "ddgst": false, 00:14:18.743 "multipath": "multipath" 00:14:18.743 } 00:14:18.743 }, 00:14:18.743 { 00:14:18.743 "method": "bdev_nvme_set_hotplug", 00:14:18.743 "params": { 00:14:18.743 "period_us": 100000, 00:14:18.743 "enable": false 00:14:18.743 } 00:14:18.743 }, 00:14:18.743 { 00:14:18.743 "method": "bdev_enable_histogram", 00:14:18.743 "params": { 00:14:18.743 "name": "nvme0n1", 00:14:18.743 "enable": true 00:14:18.743 } 00:14:18.743 }, 00:14:18.743 { 00:14:18.743 "method": "bdev_wait_for_examine" 00:14:18.743 } 00:14:18.743 ] 00:14:18.743 }, 00:14:18.743 { 00:14:18.743 "subsystem": "nbd", 00:14:18.743 "config": [] 00:14:18.743 } 00:14:18.743 ] 00:14:18.743 }' 00:14:18.743 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72456 00:14:18.743 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72456 ']' 00:14:18.743 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72456 00:14:18.743 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:18.743 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:18.743 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72456 00:14:18.743 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:18.743 killing process with pid 72456 00:14:18.743 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:18.743 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72456' 00:14:18.743 Received shutdown signal, test time was about 1.000000 seconds 00:14:18.743 00:14:18.743 Latency(us) 00:14:18.744 [2024-11-15T10:32:19.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.744 [2024-11-15T10:32:19.597Z] =================================================================================================================== 00:14:18.744 [2024-11-15T10:32:19.597Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:18.744 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72456 00:14:18.744 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72456 00:14:18.744 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72424 00:14:18.744 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72424 ']' 00:14:18.744 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72424 00:14:18.744 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:18.744 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:18.744 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72424 00:14:18.744 killing process with pid 72424 00:14:18.744 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:14:18.744 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:18.744 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72424' 00:14:18.744 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72424 00:14:18.744 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72424 00:14:19.003 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:14:19.003 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:19.003 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:19.003 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:19.003 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:14:19.003 "subsystems": [ 00:14:19.003 { 00:14:19.003 "subsystem": "keyring", 00:14:19.003 "config": [ 00:14:19.003 { 00:14:19.003 "method": "keyring_file_add_key", 00:14:19.003 "params": { 00:14:19.003 "name": "key0", 00:14:19.003 "path": "/tmp/tmp.FC9YzIT312" 00:14:19.003 } 00:14:19.003 } 00:14:19.003 ] 00:14:19.003 }, 00:14:19.003 { 00:14:19.003 "subsystem": "iobuf", 00:14:19.003 "config": [ 00:14:19.003 { 00:14:19.003 "method": "iobuf_set_options", 00:14:19.003 "params": { 00:14:19.003 "small_pool_count": 8192, 00:14:19.003 "large_pool_count": 1024, 00:14:19.003 "small_bufsize": 8192, 00:14:19.003 "large_bufsize": 135168, 00:14:19.003 "enable_numa": false 00:14:19.003 } 00:14:19.003 } 00:14:19.003 ] 00:14:19.003 }, 00:14:19.003 { 00:14:19.003 "subsystem": "sock", 00:14:19.003 "config": [ 00:14:19.003 { 00:14:19.003 "method": "sock_set_default_impl", 00:14:19.003 "params": { 00:14:19.003 "impl_name": "uring" 00:14:19.003 } 00:14:19.003 }, 00:14:19.003 { 00:14:19.003 "method": "sock_impl_set_options", 00:14:19.003 "params": { 00:14:19.003 "impl_name": "ssl", 00:14:19.003 "recv_buf_size": 4096, 00:14:19.003 "send_buf_size": 4096, 00:14:19.003 "enable_recv_pipe": true, 00:14:19.003 "enable_quickack": false, 00:14:19.003 "enable_placement_id": 0, 00:14:19.003 "enable_zerocopy_send_server": true, 00:14:19.003 "enable_zerocopy_send_client": false, 00:14:19.003 "zerocopy_threshold": 0, 00:14:19.003 "tls_version": 0, 00:14:19.003 "enable_ktls": false 00:14:19.003 } 00:14:19.003 }, 00:14:19.003 { 00:14:19.003 "method": "sock_impl_set_options", 00:14:19.003 "params": { 00:14:19.003 "impl_name": "posix", 00:14:19.003 "recv_buf_size": 2097152, 00:14:19.003 "send_buf_size": 2097152, 00:14:19.003 "enable_recv_pipe": true, 00:14:19.003 "enable_quickack": false, 00:14:19.003 "enable_placement_id": 0, 00:14:19.003 "enable_zerocopy_send_server": true, 00:14:19.003 "enable_zerocopy_send_client": false, 00:14:19.003 "zerocopy_threshold": 0, 00:14:19.003 "tls_version": 0, 00:14:19.003 "enable_ktls": false 00:14:19.003 } 00:14:19.003 }, 00:14:19.003 { 00:14:19.003 "method": "sock_impl_set_options", 00:14:19.003 "params": { 00:14:19.003 "impl_name": "uring", 00:14:19.003 "recv_buf_size": 2097152, 00:14:19.003 "send_buf_size": 2097152, 00:14:19.003 "enable_recv_pipe": true, 00:14:19.003 "enable_quickack": false, 00:14:19.003 "enable_placement_id": 0, 00:14:19.003 "enable_zerocopy_send_server": false, 00:14:19.003 "enable_zerocopy_send_client": false, 00:14:19.003 "zerocopy_threshold": 0, 00:14:19.003 "tls_version": 0, 
00:14:19.003 "enable_ktls": false 00:14:19.003 } 00:14:19.003 } 00:14:19.003 ] 00:14:19.003 }, 00:14:19.003 { 00:14:19.003 "subsystem": "vmd", 00:14:19.003 "config": [] 00:14:19.003 }, 00:14:19.003 { 00:14:19.003 "subsystem": "accel", 00:14:19.003 "config": [ 00:14:19.003 { 00:14:19.003 "method": "accel_set_options", 00:14:19.003 "params": { 00:14:19.003 "small_cache_size": 128, 00:14:19.003 "large_cache_size": 16, 00:14:19.003 "task_count": 2048, 00:14:19.003 "sequence_count": 2048, 00:14:19.003 "buf_count": 2048 00:14:19.003 } 00:14:19.003 } 00:14:19.003 ] 00:14:19.003 }, 00:14:19.003 { 00:14:19.003 "subsystem": "bdev", 00:14:19.003 "config": [ 00:14:19.003 { 00:14:19.003 "method": "bdev_set_options", 00:14:19.003 "params": { 00:14:19.003 "bdev_io_pool_size": 65535, 00:14:19.003 "bdev_io_cache_size": 256, 00:14:19.003 "bdev_auto_examine": true, 00:14:19.003 "iobuf_small_cache_size": 128, 00:14:19.003 "iobuf_large_cache_size": 16 00:14:19.003 } 00:14:19.003 }, 00:14:19.003 { 00:14:19.003 "method": "bdev_raid_set_options", 00:14:19.003 "params": { 00:14:19.003 "process_window_size_kb": 1024, 00:14:19.003 "process_max_bandwidth_mb_sec": 0 00:14:19.003 } 00:14:19.003 }, 00:14:19.003 { 00:14:19.003 "method": "bdev_iscsi_set_options", 00:14:19.003 "params": { 00:14:19.003 "timeout_sec": 30 00:14:19.003 } 00:14:19.003 }, 00:14:19.003 { 00:14:19.003 "method": "bdev_nvme_set_options", 00:14:19.003 "params": { 00:14:19.003 "action_on_timeout": "none", 00:14:19.003 "timeout_us": 0, 00:14:19.003 "timeout_admin_us": 0, 00:14:19.003 "keep_alive_timeout_ms": 10000, 00:14:19.003 "arbitration_burst": 0, 00:14:19.003 "low_priority_weight": 0, 00:14:19.003 "medium_priority_weight": 0, 00:14:19.003 "high_priority_weight": 0, 00:14:19.003 "nvme_adminq_poll_period_us": 10000, 00:14:19.003 "nvme_ioq_poll_period_us": 0, 00:14:19.003 "io_queue_requests": 0, 00:14:19.003 "delay_cmd_submit": true, 00:14:19.003 "transport_retry_count": 4, 00:14:19.003 "bdev_retry_count": 3, 00:14:19.003 "transport_ack_timeout": 0, 00:14:19.003 "ctrlr_loss_timeout_sec": 0, 00:14:19.003 "reconnect_delay_sec": 0, 00:14:19.003 "fast_io_fail_timeout_sec": 0, 00:14:19.003 "disable_auto_failback": false, 00:14:19.003 "generate_uuids": false, 00:14:19.003 "transport_tos": 0, 00:14:19.003 "nvme_error_stat": false, 00:14:19.003 "rdma_srq_size": 0, 00:14:19.003 "io_path_stat": false, 00:14:19.003 "allow_accel_sequence": false, 00:14:19.003 "rdma_max_cq_size": 0, 00:14:19.004 "rdma_cm_event_timeout_ms": 0, 00:14:19.004 "dhchap_digests": [ 00:14:19.004 "sha256", 00:14:19.004 "sha384", 00:14:19.004 "sha512" 00:14:19.004 ], 00:14:19.004 "dhchap_dhgroups": [ 00:14:19.004 "null", 00:14:19.004 "ffdhe2048", 00:14:19.004 "ffdhe3072", 00:14:19.004 "ffdhe4096", 00:14:19.004 "ffdhe6144", 00:14:19.004 "ffdhe8192" 00:14:19.004 ] 00:14:19.004 } 00:14:19.004 }, 00:14:19.004 { 00:14:19.004 "method": "bdev_nvme_set_hotplug", 00:14:19.004 "params": { 00:14:19.004 "period_us": 100000, 00:14:19.004 "enable": false 00:14:19.004 } 00:14:19.004 }, 00:14:19.004 { 00:14:19.004 "method": "bdev_malloc_create", 00:14:19.004 "params": { 00:14:19.004 "name": "malloc0", 00:14:19.004 "num_blocks": 8192, 00:14:19.004 "block_size": 4096, 00:14:19.004 "physical_block_size": 4096, 00:14:19.004 "uuid": "dd0b2998-1622-441c-9ccc-c1191c98caa4", 00:14:19.004 "optimal_io_boundary": 0, 00:14:19.004 "md_size": 0, 00:14:19.004 "dif_type": 0, 00:14:19.004 "dif_is_head_of_md": false, 00:14:19.004 "dif_pi_format": 0 00:14:19.004 } 00:14:19.004 }, 00:14:19.004 { 00:14:19.004 "method": 
"bdev_wait_for_examine" 00:14:19.004 } 00:14:19.004 ] 00:14:19.004 }, 00:14:19.004 { 00:14:19.004 "subsystem": "nbd", 00:14:19.004 "config": [] 00:14:19.004 }, 00:14:19.004 { 00:14:19.004 "subsystem": "scheduler", 00:14:19.004 "config": [ 00:14:19.004 { 00:14:19.004 "method": "framework_set_scheduler", 00:14:19.004 "params": { 00:14:19.004 "name": "static" 00:14:19.004 } 00:14:19.004 } 00:14:19.004 ] 00:14:19.004 }, 00:14:19.004 { 00:14:19.004 "subsystem": "nvmf", 00:14:19.004 "config": [ 00:14:19.004 { 00:14:19.004 "method": "nvmf_set_config", 00:14:19.004 "params": { 00:14:19.004 "discovery_filter": "match_any", 00:14:19.004 "admin_cmd_passthru": { 00:14:19.004 "identify_ctrlr": false 00:14:19.004 }, 00:14:19.004 "dhchap_digests": [ 00:14:19.004 "sha256", 00:14:19.004 "sha384", 00:14:19.004 "sha512" 00:14:19.004 ], 00:14:19.004 "dhchap_dhgroups": [ 00:14:19.004 "null", 00:14:19.004 "ffdhe2048", 00:14:19.004 "ffdhe3072", 00:14:19.004 "ffdhe4096", 00:14:19.004 "ffdhe6144", 00:14:19.004 "ffdhe8192" 00:14:19.004 ] 00:14:19.004 } 00:14:19.004 }, 00:14:19.004 { 00:14:19.004 "method": "nvmf_set_max_subsystems", 00:14:19.004 "params": { 00:14:19.004 "max_subsystems": 1024 00:14:19.004 } 00:14:19.004 }, 00:14:19.004 { 00:14:19.004 "method": "nvmf_set_crdt", 00:14:19.004 "params": { 00:14:19.004 "crdt1": 0, 00:14:19.004 "crdt2": 0, 00:14:19.004 "crdt3": 0 00:14:19.004 } 00:14:19.004 }, 00:14:19.004 { 00:14:19.004 "method": "nvmf_create_transport", 00:14:19.004 "params": { 00:14:19.004 "trtype": "TCP", 00:14:19.004 "max_queue_depth": 128, 00:14:19.004 "max_io_qpairs_per_ctrlr": 127, 00:14:19.004 "in_capsule_data_size": 4096, 00:14:19.004 "max_io_size": 131072, 00:14:19.004 "io_unit_size": 131072, 00:14:19.004 "max_aq_depth": 128, 00:14:19.004 "num_shared_buffers": 511, 00:14:19.004 "buf_cache_size": 4294967295, 00:14:19.004 "dif_insert_or_strip": false, 00:14:19.004 "zcopy": false, 00:14:19.004 "c2h_success": false, 00:14:19.004 "sock_priority": 0, 00:14:19.004 "abort_timeout_sec": 1, 00:14:19.004 "ack_timeout": 0, 00:14:19.004 "data_wr_pool_size": 0 00:14:19.004 } 00:14:19.004 }, 00:14:19.004 { 00:14:19.004 "method": "nvmf_create_subsystem", 00:14:19.004 "params": { 00:14:19.004 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.004 "allow_any_host": false, 00:14:19.004 "serial_number": "00000000000000000000", 00:14:19.004 "model_number": "SPDK bdev Controller", 00:14:19.004 "max_namespaces": 32, 00:14:19.004 "min_cntlid": 1, 00:14:19.004 "max_cntlid": 65519, 00:14:19.004 "ana_reporting": false 00:14:19.004 } 00:14:19.004 }, 00:14:19.004 { 00:14:19.004 "method": "nvmf_subsystem_add_host", 00:14:19.004 "params": { 00:14:19.004 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.004 "host": "nqn.2016-06.io.spdk:host1", 00:14:19.004 "psk": "key0" 00:14:19.004 } 00:14:19.004 }, 00:14:19.004 { 00:14:19.004 "method": "nvmf_subsystem_add_ns", 00:14:19.004 "params": { 00:14:19.004 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.004 "namespace": { 00:14:19.004 "nsid": 1, 00:14:19.004 "bdev_name": "malloc0", 00:14:19.004 "nguid": "DD0B29981622441C9CCCC1191C98CAA4", 00:14:19.004 "uuid": "dd0b2998-1622-441c-9ccc-c1191c98caa4", 00:14:19.004 "no_auto_visible": false 00:14:19.004 } 00:14:19.004 } 00:14:19.004 }, 00:14:19.004 { 00:14:19.004 "method": "nvmf_subsystem_add_listener", 00:14:19.004 "params": { 00:14:19.004 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.004 "listen_address": { 00:14:19.004 "trtype": "TCP", 00:14:19.004 "adrfam": "IPv4", 00:14:19.004 "traddr": "10.0.0.3", 00:14:19.004 "trsvcid": "4420" 00:14:19.004 
}, 00:14:19.004 "secure_channel": false, 00:14:19.004 "sock_impl": "ssl" 00:14:19.004 } 00:14:19.004 } 00:14:19.004 ] 00:14:19.004 } 00:14:19.004 ] 00:14:19.004 }' 00:14:19.004 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72509 00:14:19.004 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:19.004 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72509 00:14:19.004 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72509 ']' 00:14:19.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.004 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.004 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:19.004 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.004 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:19.004 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:19.004 [2024-11-15 10:32:19.818120] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:14:19.004 [2024-11-15 10:32:19.818224] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.263 [2024-11-15 10:32:19.962526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.263 [2024-11-15 10:32:20.023321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.263 [2024-11-15 10:32:20.023381] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.263 [2024-11-15 10:32:20.023394] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:19.263 [2024-11-15 10:32:20.023403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:19.263 [2024-11-15 10:32:20.023410] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:19.263 [2024-11-15 10:32:20.023866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.523 [2024-11-15 10:32:20.190235] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:19.523 [2024-11-15 10:32:20.272965] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.523 [2024-11-15 10:32:20.304940] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:19.523 [2024-11-15 10:32:20.305221] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:20.092 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:20.092 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:20.092 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:20.092 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:20.092 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:20.092 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.092 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72541 00:14:20.092 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72541 /var/tmp/bdevperf.sock 00:14:20.092 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72541 ']' 00:14:20.092 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:20.092 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:20.092 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:20.092 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
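The last few steps follow a save-and-replay pattern: both the target and bdevperf dump their live configuration with save_config, the old processes are killed, and fresh instances are started with that JSON fed back in over /dev/fd/62 and /dev/fd/63. In plain shell that corresponds roughly to the sketch below (rpc_cmd in the log is the test suite's wrapper around rpc.py on the default socket, and the ip netns prefix used in this run is omitted):

    # capture the running configuration of both processes
    tgtcfg=$(scripts/rpc.py save_config)
    bperfcfg=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
    # ...stop the old nvmf_tgt and bdevperf instances...
    # restart them from the saved JSON; bash process substitution is what shows up
    # in the log as the /dev/fd/62 and /dev/fd/63 config paths
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &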
00:14:20.092 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:14:20.092 "subsystems": [ 00:14:20.092 { 00:14:20.092 "subsystem": "keyring", 00:14:20.092 "config": [ 00:14:20.092 { 00:14:20.092 "method": "keyring_file_add_key", 00:14:20.092 "params": { 00:14:20.092 "name": "key0", 00:14:20.092 "path": "/tmp/tmp.FC9YzIT312" 00:14:20.092 } 00:14:20.092 } 00:14:20.092 ] 00:14:20.092 }, 00:14:20.092 { 00:14:20.092 "subsystem": "iobuf", 00:14:20.092 "config": [ 00:14:20.092 { 00:14:20.092 "method": "iobuf_set_options", 00:14:20.092 "params": { 00:14:20.092 "small_pool_count": 8192, 00:14:20.092 "large_pool_count": 1024, 00:14:20.093 "small_bufsize": 8192, 00:14:20.093 "large_bufsize": 135168, 00:14:20.093 "enable_numa": false 00:14:20.093 } 00:14:20.093 } 00:14:20.093 ] 00:14:20.093 }, 00:14:20.093 { 00:14:20.093 "subsystem": "sock", 00:14:20.093 "config": [ 00:14:20.093 { 00:14:20.093 "method": "sock_set_default_impl", 00:14:20.093 "params": { 00:14:20.093 "impl_name": "uring" 00:14:20.093 } 00:14:20.093 }, 00:14:20.093 { 00:14:20.093 "method": "sock_impl_set_options", 00:14:20.093 "params": { 00:14:20.093 "impl_name": "ssl", 00:14:20.093 "recv_buf_size": 4096, 00:14:20.093 "send_buf_size": 4096, 00:14:20.093 "enable_recv_pipe": true, 00:14:20.093 "enable_quickack": false, 00:14:20.093 "enable_placement_id": 0, 00:14:20.093 "enable_zerocopy_send_server": true, 00:14:20.093 "enable_zerocopy_send_client": false, 00:14:20.093 "zerocopy_threshold": 0, 00:14:20.093 "tls_version": 0, 00:14:20.093 "enable_ktls": false 00:14:20.093 } 00:14:20.093 }, 00:14:20.093 { 00:14:20.093 "method": "sock_impl_set_options", 00:14:20.093 "params": { 00:14:20.093 "impl_name": "posix", 00:14:20.093 "recv_buf_size": 2097152, 00:14:20.093 "send_buf_size": 2097152, 00:14:20.093 "enable_recv_pipe": true, 00:14:20.093 "enable_quickack": false, 00:14:20.093 "enable_placement_id": 0, 00:14:20.093 "enable_zerocopy_send_server": true, 00:14:20.093 "enable_zerocopy_send_client": false, 00:14:20.093 "zerocopy_threshold": 0, 00:14:20.093 "tls_version": 0, 00:14:20.093 "enable_ktls": false 00:14:20.093 } 00:14:20.093 }, 00:14:20.093 { 00:14:20.093 "method": "sock_impl_set_options", 00:14:20.093 "params": { 00:14:20.093 "impl_name": "uring", 00:14:20.093 "recv_buf_size": 2097152, 00:14:20.093 "send_buf_size": 2097152, 00:14:20.093 "enable_recv_pipe": true, 00:14:20.093 "enable_quickack": false, 00:14:20.093 "enable_placement_id": 0, 00:14:20.093 "enable_zerocopy_send_server": false, 00:14:20.093 "enable_zerocopy_send_client": false, 00:14:20.093 "zerocopy_threshold": 0, 00:14:20.093 "tls_version": 0, 00:14:20.093 "enable_ktls": false 00:14:20.093 } 00:14:20.093 } 00:14:20.093 ] 00:14:20.093 }, 00:14:20.093 { 00:14:20.093 "subsystem": "vmd", 00:14:20.093 "config": [] 00:14:20.093 }, 00:14:20.093 { 00:14:20.093 "subsystem": "accel", 00:14:20.093 "config": [ 00:14:20.093 { 00:14:20.093 "method": "accel_set_options", 00:14:20.093 "params": { 00:14:20.093 "small_cache_size": 128, 00:14:20.093 "large_cache_size": 16, 00:14:20.093 "task_count": 2048, 00:14:20.093 "sequence_count": 2048, 00:14:20.093 "buf_count": 2048 00:14:20.093 } 00:14:20.093 } 00:14:20.093 ] 00:14:20.093 }, 00:14:20.093 { 00:14:20.093 "subsystem": "bdev", 00:14:20.093 "config": [ 00:14:20.093 { 00:14:20.093 "method": "bdev_set_options", 00:14:20.093 "params": { 00:14:20.093 "bdev_io_pool_size": 65535, 00:14:20.093 "bdev_io_cache_size": 256, 00:14:20.093 "bdev_auto_examine": true, 00:14:20.093 "iobuf_small_cache_size": 128, 00:14:20.093 
"iobuf_large_cache_size": 16 00:14:20.093 } 00:14:20.093 }, 00:14:20.093 { 00:14:20.093 "method": "bdev_raid_set_options", 00:14:20.093 "params": { 00:14:20.093 "process_window_size_kb": 1024, 00:14:20.093 "process_max_bandwidth_mb_sec": 0 00:14:20.093 } 00:14:20.093 }, 00:14:20.093 { 00:14:20.093 "method": "bdev_iscsi_set_options", 00:14:20.093 "params": { 00:14:20.093 "timeout_sec": 30 00:14:20.093 } 00:14:20.093 }, 00:14:20.093 { 00:14:20.093 "method": "bdev_nvme_set_options", 00:14:20.093 "params": { 00:14:20.093 "action_on_timeout": "none", 00:14:20.093 "timeout_us": 0, 00:14:20.093 "timeout_admin_us": 0, 00:14:20.093 "keep_alive_timeout_ms": 10000, 00:14:20.093 "arbitration_burst": 0, 00:14:20.093 "low_priority_weight": 0, 00:14:20.093 "medium_priority_weight": 0, 00:14:20.093 "high_priority_weight": 0, 00:14:20.093 "nvme_adminq_poll_period_us": 10000, 00:14:20.093 "nvme_ioq_poll_period_us": 0, 00:14:20.093 "io_queue_requests": 512, 00:14:20.093 "delay_cmd_submit": true, 00:14:20.093 "transport_retry_count": 4, 00:14:20.093 "bdev_retry_count": 3, 00:14:20.093 "transport_ack_timeout": 0, 00:14:20.093 "ctrlr_loss_timeout_sec": 0, 00:14:20.093 "reconnect_delay_sec": 0, 00:14:20.093 "fast_io_fail_timeout_sec": 0, 00:14:20.093 "disable_auto_failback": false, 00:14:20.093 "generate_uuids": false, 00:14:20.093 "transport_tos": 0, 00:14:20.093 "nvme_error_stat": false, 00:14:20.093 "rdma_srq_size": 0, 00:14:20.093 "io_path_stat": false, 00:14:20.093 "allow_accel_sequence": false, 00:14:20.093 "rdma_max_cq_size": 0, 00:14:20.093 "rdma_cm_event_timeout_ms": 0, 00:14:20.093 "dhchap_digests": [ 00:14:20.093 "sha256", 00:14:20.093 "sha384", 00:14:20.093 "sha512" 00:14:20.093 ], 00:14:20.093 "dhchap_dhgroups": [ 00:14:20.093 "null", 00:14:20.093 "ffdhe2048", 00:14:20.093 "ffdhe3072", 00:14:20.093 "ffdhe4096", 00:14:20.093 "ffdhe6144", 00:14:20.093 "ffdhe8192" 00:14:20.093 ] 00:14:20.093 } 00:14:20.093 }, 00:14:20.093 { 00:14:20.093 "method": "bdev_nvme_attach_controller", 00:14:20.093 "params": { 00:14:20.093 "name": "nvme0", 00:14:20.093 "trtype": "TCP", 00:14:20.093 "adrfam": "IPv4", 00:14:20.093 "traddr": "10.0.0.3", 00:14:20.093 "trsvcid": "4420", 00:14:20.093 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.093 "prchk_reftag": false, 00:14:20.093 "prchk_guard": false, 00:14:20.093 "ctrlr_loss_timeout_sec": 0, 00:14:20.093 "reconnect_delay_sec": 0, 00:14:20.093 "fast_io_fail_timeout_sec": 0, 00:14:20.093 "psk": "key0", 00:14:20.093 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:20.093 "hdgst": false, 00:14:20.093 "ddgst": false, 00:14:20.093 "multipath": "multipath" 00:14:20.093 } 00:14:20.093 }, 00:14:20.093 { 00:14:20.093 "method": "bdev_nvme_set_hotplug", 00:14:20.093 "params": { 00:14:20.093 "period_us": 100000, 00:14:20.093 "enable": false 00:14:20.093 } 00:14:20.093 }, 00:14:20.093 { 00:14:20.093 "method": "bdev_enable_histogram", 00:14:20.093 "params": { 00:14:20.093 "name": "nvme0n1", 00:14:20.093 "enable": true 00:14:20.093 } 00:14:20.093 }, 00:14:20.093 { 00:14:20.093 "method": "bdev_wait_for_examine" 00:14:20.093 } 00:14:20.093 ] 00:14:20.093 }, 00:14:20.093 { 00:14:20.093 "subsystem": "nbd", 00:14:20.093 "config": [] 00:14:20.093 } 00:14:20.093 ] 00:14:20.093 }' 00:14:20.093 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:20.093 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.093 [2024-11-15 10:32:20.936859] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 
initialization... 00:14:20.093 [2024-11-15 10:32:20.937231] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72541 ] 00:14:20.353 [2024-11-15 10:32:21.086791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.353 [2024-11-15 10:32:21.149830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.612 [2024-11-15 10:32:21.285198] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:20.612 [2024-11-15 10:32:21.337517] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:21.183 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:21.183 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:21.183 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:21.183 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:14:21.442 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.442 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:21.442 Running I/O for 1 seconds... 00:14:22.820 3841.00 IOPS, 15.00 MiB/s 00:14:22.820 Latency(us) 00:14:22.820 [2024-11-15T10:32:23.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.820 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:22.820 Verification LBA range: start 0x0 length 0x2000 00:14:22.820 nvme0n1 : 1.02 3901.49 15.24 0.00 0.00 32443.40 207.59 19541.64 00:14:22.820 [2024-11-15T10:32:23.673Z] =================================================================================================================== 00:14:22.820 [2024-11-15T10:32:23.673Z] Total : 3901.49 15.24 0.00 0.00 32443.40 207.59 19541.64 00:14:22.820 { 00:14:22.820 "results": [ 00:14:22.820 { 00:14:22.820 "job": "nvme0n1", 00:14:22.820 "core_mask": "0x2", 00:14:22.820 "workload": "verify", 00:14:22.820 "status": "finished", 00:14:22.820 "verify_range": { 00:14:22.820 "start": 0, 00:14:22.820 "length": 8192 00:14:22.820 }, 00:14:22.820 "queue_depth": 128, 00:14:22.820 "io_size": 4096, 00:14:22.820 "runtime": 1.017303, 00:14:22.820 "iops": 3901.4924756930827, 00:14:22.820 "mibps": 15.240204983176104, 00:14:22.820 "io_failed": 0, 00:14:22.820 "io_timeout": 0, 00:14:22.820 "avg_latency_us": 32443.397113080922, 00:14:22.820 "min_latency_us": 207.59272727272727, 00:14:22.820 "max_latency_us": 19541.643636363635 00:14:22.820 } 00:14:22.820 ], 00:14:22.820 "core_count": 1 00:14:22.820 } 00:14:22.820 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:14:22.820 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:14:22.820 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:22.821 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:14:22.821 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@811 -- # id=0 00:14:22.821 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:14:22.821 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:22.821 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:14:22.821 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:14:22.821 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:14:22.821 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:22.821 nvmf_trace.0 00:14:22.821 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:14:22.821 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72541 00:14:22.821 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72541 ']' 00:14:22.821 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72541 00:14:22.821 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:22.821 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:22.821 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72541 00:14:22.821 killing process with pid 72541 00:14:22.821 Received shutdown signal, test time was about 1.000000 seconds 00:14:22.821 00:14:22.821 Latency(us) 00:14:22.821 [2024-11-15T10:32:23.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.821 [2024-11-15T10:32:23.674Z] =================================================================================================================== 00:14:22.821 [2024-11-15T10:32:23.674Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:22.821 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:22.821 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:22.821 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72541' 00:14:22.821 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72541 00:14:22.821 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72541 00:14:22.821 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:22.821 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:22.821 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:14:23.080 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:23.080 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:14:23.080 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:23.080 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:23.080 rmmod nvme_tcp 00:14:23.080 rmmod nvme_fabrics 00:14:23.080 rmmod nvme_keyring 00:14:23.080 10:32:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:23.080 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:14:23.080 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:14:23.080 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72509 ']' 00:14:23.080 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72509 00:14:23.080 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72509 ']' 00:14:23.080 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72509 00:14:23.080 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:23.080 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:23.080 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72509 00:14:23.080 killing process with pid 72509 00:14:23.080 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:23.080 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:23.080 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72509' 00:14:23.080 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72509 00:14:23.080 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72509 00:14:23.340 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:23.340 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:23.340 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:23.340 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:14:23.340 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:14:23.340 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:23.340 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:14:23.340 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:23.340 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:23.340 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:23.340 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:23.340 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:23.340 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:23.340 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:23.340 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:23.340 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:23.340 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set 
nvmf_tgt_br2 down 00:14:23.340 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:23.340 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:23.340 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:23.340 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:23.340 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:23.598 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:23.598 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.599 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:23.599 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.599 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:14:23.599 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.TARIjO0WbZ /tmp/tmp.M1qiX3EiId /tmp/tmp.FC9YzIT312 00:14:23.599 ************************************ 00:14:23.599 END TEST nvmf_tls 00:14:23.599 ************************************ 00:14:23.599 00:14:23.599 real 1m27.688s 00:14:23.599 user 2m22.958s 00:14:23.599 sys 0m27.466s 00:14:23.599 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:23.599 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.599 10:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:23.599 10:32:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:23.599 10:32:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:23.599 10:32:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:23.599 ************************************ 00:14:23.599 START TEST nvmf_fips 00:14:23.599 ************************************ 00:14:23.599 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:23.599 * Looking for test storage... 
00:14:23.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:23.599 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:23.599 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:23.599 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:23.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.859 --rc genhtml_branch_coverage=1 00:14:23.859 --rc genhtml_function_coverage=1 00:14:23.859 --rc genhtml_legend=1 00:14:23.859 --rc geninfo_all_blocks=1 00:14:23.859 --rc geninfo_unexecuted_blocks=1 00:14:23.859 00:14:23.859 ' 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:23.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.859 --rc genhtml_branch_coverage=1 00:14:23.859 --rc genhtml_function_coverage=1 00:14:23.859 --rc genhtml_legend=1 00:14:23.859 --rc geninfo_all_blocks=1 00:14:23.859 --rc geninfo_unexecuted_blocks=1 00:14:23.859 00:14:23.859 ' 00:14:23.859 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:23.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.859 --rc genhtml_branch_coverage=1 00:14:23.859 --rc genhtml_function_coverage=1 00:14:23.860 --rc genhtml_legend=1 00:14:23.860 --rc geninfo_all_blocks=1 00:14:23.860 --rc geninfo_unexecuted_blocks=1 00:14:23.860 00:14:23.860 ' 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:23.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.860 --rc genhtml_branch_coverage=1 00:14:23.860 --rc genhtml_function_coverage=1 00:14:23.860 --rc genhtml_legend=1 00:14:23.860 --rc geninfo_all_blocks=1 00:14:23.860 --rc geninfo_unexecuted_blocks=1 00:14:23.860 00:14:23.860 ' 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
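Both of the checks traced above (lcov "lt 1.15 2" here and openssl "ge 3.1.1 3.0.0" further down) go through the cmp_versions helper in scripts/common.sh, which splits each version string on '.', '-' and ':' and compares the resulting fields numerically. A simplified standalone sketch of that idea in bash (an illustration, not the actual helper):

    # hypothetical helper: returns 0 if $1 >= $2, comparing numeric fields left to right
    ver_ge() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 1
        done
        return 0
    }
    ver_ge 3.1.1 3.0.0 && echo "openssl is new enough for the FIPS test"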
00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:23.860 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:14:23.860 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:14:23.861 Error setting digest 00:14:23.861 40E23988417F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:14:23.861 40E23988417F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:23.861 
10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:23.861 Cannot find device "nvmf_init_br" 00:14:23.861 10:32:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:23.861 Cannot find device "nvmf_init_br2" 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:23.861 Cannot find device "nvmf_tgt_br" 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:14:23.861 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:24.119 Cannot find device "nvmf_tgt_br2" 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:24.119 Cannot find device "nvmf_init_br" 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:24.119 Cannot find device "nvmf_init_br2" 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:24.119 Cannot find device "nvmf_tgt_br" 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:24.119 Cannot find device "nvmf_tgt_br2" 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:24.119 Cannot find device "nvmf_br" 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:24.119 Cannot find device "nvmf_init_if" 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:24.119 Cannot find device "nvmf_init_if2" 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:24.119 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:24.119 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:24.119 10:32:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:24.119 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:24.378 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:24.378 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:24.378 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:24.378 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:24.378 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:14:24.378 00:14:24.378 --- 10.0.0.3 ping statistics --- 00:14:24.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.378 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:24.378 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:24.378 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:14:24.378 00:14:24.378 --- 10.0.0.4 ping statistics --- 00:14:24.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.378 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:24.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:24.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:14:24.378 00:14:24.378 --- 10.0.0.1 ping statistics --- 00:14:24.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.378 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:24.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:24.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:14:24.378 00:14:24.378 --- 10.0.0.2 ping statistics --- 00:14:24.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.378 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72854 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72854 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 72854 ']' 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:24.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:24.378 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:24.378 [2024-11-15 10:32:25.212095] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:14:24.378 [2024-11-15 10:32:25.212379] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.636 [2024-11-15 10:32:25.367343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.636 [2024-11-15 10:32:25.433793] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:24.636 [2024-11-15 10:32:25.433865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:24.636 [2024-11-15 10:32:25.433880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:24.636 [2024-11-15 10:32:25.433891] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:24.636 [2024-11-15 10:32:25.433900] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:24.636 [2024-11-15 10:32:25.434369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.894 [2024-11-15 10:32:25.495962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:25.461 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:25.461 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:14:25.461 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:25.461 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:25.461 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:25.461 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.461 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:14:25.461 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:25.461 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:14:25.461 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.uOA 00:14:25.461 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:25.461 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.uOA 00:14:25.461 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.uOA 00:14:25.461 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.uOA 00:14:25.461 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:25.721 [2024-11-15 10:32:26.525427] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.721 [2024-11-15 10:32:26.541395] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:25.721 [2024-11-15 10:32:26.541709] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:25.991 malloc0 00:14:25.991 10:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:25.991 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72901 00:14:25.991 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:25.991 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72901 /var/tmp/bdevperf.sock 00:14:25.991 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 72901 ']' 00:14:25.991 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:25.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:25.991 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:25.991 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:25.991 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:25.991 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:25.991 [2024-11-15 10:32:26.695482] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:14:25.991 [2024-11-15 10:32:26.695923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72901 ] 00:14:26.255 [2024-11-15 10:32:26.844724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.255 [2024-11-15 10:32:26.907173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.255 [2024-11-15 10:32:26.961198] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:26.255 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:26.255 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:14:26.255 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.uOA 00:14:26.513 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:26.771 [2024-11-15 10:32:27.507085] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:26.771 TLSTESTn1 00:14:26.771 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:27.030 Running I/O for 10 seconds... 
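The fips.sh steps traced above reduce to a short RPC sequence: write the TLS interchange-format PSK to a 0600 key file, start bdevperf paused on its own RPC socket, register the key, and attach the controller over TLS. A minimal stand-alone sketch of that sequence, assuming $SPDK_DIR points at an SPDK build tree and the target is already listening with TLS on 10.0.0.3:4420 (key, NQNs and flags are copied from the trace; the harness's waitforlisten step is omitted):

  # write the NVMe TLS interchange-format PSK to a private key file
  key_path=$(mktemp -t spdk-psk.XXX)
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
  chmod 0600 "$key_path"

  # start bdevperf paused (-z) on its own RPC socket, then register the key and attach over TLS
  "$SPDK_DIR"/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
  "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

  # kick off the 10-second verify workload on the attached TLSTESTn1 bdev
  "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The per-second IOPS samples and the latency summary that follow are the output of that perform_tests call.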
00:14:28.901 3840.00 IOPS, 15.00 MiB/s [2024-11-15T10:32:30.727Z] 3776.00 IOPS, 14.75 MiB/s [2024-11-15T10:32:32.106Z] 3797.33 IOPS, 14.83 MiB/s [2024-11-15T10:32:33.046Z] 3808.00 IOPS, 14.88 MiB/s [2024-11-15T10:32:33.981Z] 3814.40 IOPS, 14.90 MiB/s [2024-11-15T10:32:34.924Z] 3818.67 IOPS, 14.92 MiB/s [2024-11-15T10:32:35.861Z] 3833.71 IOPS, 14.98 MiB/s [2024-11-15T10:32:36.797Z] 3828.25 IOPS, 14.95 MiB/s [2024-11-15T10:32:37.732Z] 3825.89 IOPS, 14.94 MiB/s [2024-11-15T10:32:37.732Z] 3814.40 IOPS, 14.90 MiB/s 00:14:36.879 Latency(us) 00:14:36.879 [2024-11-15T10:32:37.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.879 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:36.879 Verification LBA range: start 0x0 length 0x2000 00:14:36.879 TLSTESTn1 : 10.02 3819.65 14.92 0.00 0.00 33443.61 7506.85 28001.75 00:14:36.879 [2024-11-15T10:32:37.732Z] =================================================================================================================== 00:14:36.879 [2024-11-15T10:32:37.732Z] Total : 3819.65 14.92 0.00 0.00 33443.61 7506.85 28001.75 00:14:36.879 { 00:14:36.879 "results": [ 00:14:36.879 { 00:14:36.879 "job": "TLSTESTn1", 00:14:36.879 "core_mask": "0x4", 00:14:36.879 "workload": "verify", 00:14:36.879 "status": "finished", 00:14:36.879 "verify_range": { 00:14:36.879 "start": 0, 00:14:36.879 "length": 8192 00:14:36.879 }, 00:14:36.879 "queue_depth": 128, 00:14:36.879 "io_size": 4096, 00:14:36.879 "runtime": 10.019768, 00:14:36.879 "iops": 3819.6493172297005, 00:14:36.879 "mibps": 14.920505145428518, 00:14:36.879 "io_failed": 0, 00:14:36.879 "io_timeout": 0, 00:14:36.879 "avg_latency_us": 33443.60571602311, 00:14:36.879 "min_latency_us": 7506.850909090909, 00:14:36.879 "max_latency_us": 28001.745454545453 00:14:36.879 } 00:14:36.879 ], 00:14:36.879 "core_count": 1 00:14:36.879 } 00:14:37.138 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:37.138 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:37.138 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:14:37.138 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:14:37.138 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:14:37.138 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:37.138 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:14:37.138 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:14:37.138 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:14:37.139 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:37.139 nvmf_trace.0 00:14:37.139 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:14:37.139 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72901 00:14:37.139 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 72901 ']' 00:14:37.139 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 
72901 00:14:37.139 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:14:37.139 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:37.139 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72901 00:14:37.139 killing process with pid 72901 00:14:37.139 Received shutdown signal, test time was about 10.000000 seconds 00:14:37.139 00:14:37.139 Latency(us) 00:14:37.139 [2024-11-15T10:32:37.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.139 [2024-11-15T10:32:37.992Z] =================================================================================================================== 00:14:37.139 [2024-11-15T10:32:37.992Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:37.139 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:37.139 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:37.139 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72901' 00:14:37.139 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 72901 00:14:37.139 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 72901 00:14:37.397 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:37.397 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:37.397 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:14:37.397 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:37.397 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:14:37.397 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:37.397 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:37.397 rmmod nvme_tcp 00:14:37.397 rmmod nvme_fabrics 00:14:37.397 rmmod nvme_keyring 00:14:37.397 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:37.397 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:14:37.397 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:14:37.397 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72854 ']' 00:14:37.397 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72854 00:14:37.397 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 72854 ']' 00:14:37.397 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 72854 00:14:37.397 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:14:37.397 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:37.397 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72854 00:14:37.397 killing process with pid 72854 00:14:37.397 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:37.397 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:37.397 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72854' 00:14:37.397 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 72854 00:14:37.397 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 72854 00:14:37.656 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:37.656 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:37.657 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:37.657 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:14:37.657 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:14:37.657 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:37.657 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:14:37.657 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:37.657 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:37.657 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:37.657 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:37.657 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:37.657 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:37.657 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:37.657 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:37.916 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:37.916 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:37.916 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:37.916 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:37.916 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:37.916 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:37.916 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:37.916 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:37.916 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.916 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:37.916 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.916 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:14:37.916 10:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.uOA 00:14:37.916 ************************************ 00:14:37.916 END TEST nvmf_fips 00:14:37.916 ************************************ 00:14:37.916 00:14:37.916 real 0m14.393s 00:14:37.916 user 0m19.393s 00:14:37.916 sys 0m5.807s 00:14:37.916 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:37.916 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:37.916 10:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:37.916 10:32:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:37.916 10:32:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:37.916 10:32:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:37.916 ************************************ 00:14:37.916 START TEST nvmf_control_msg_list 00:14:37.916 ************************************ 00:14:37.916 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:38.176 * Looking for test storage... 00:14:38.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:38.176 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:38.176 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:14:38.176 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:38.176 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:38.176 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:38.176 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:38.176 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:38.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.177 --rc genhtml_branch_coverage=1 00:14:38.177 --rc genhtml_function_coverage=1 00:14:38.177 --rc genhtml_legend=1 00:14:38.177 --rc geninfo_all_blocks=1 00:14:38.177 --rc geninfo_unexecuted_blocks=1 00:14:38.177 00:14:38.177 ' 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:38.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.177 --rc genhtml_branch_coverage=1 00:14:38.177 --rc genhtml_function_coverage=1 00:14:38.177 --rc genhtml_legend=1 00:14:38.177 --rc geninfo_all_blocks=1 00:14:38.177 --rc geninfo_unexecuted_blocks=1 00:14:38.177 00:14:38.177 ' 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:38.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.177 --rc genhtml_branch_coverage=1 00:14:38.177 --rc genhtml_function_coverage=1 00:14:38.177 --rc genhtml_legend=1 00:14:38.177 --rc geninfo_all_blocks=1 00:14:38.177 --rc geninfo_unexecuted_blocks=1 00:14:38.177 00:14:38.177 ' 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:38.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.177 --rc genhtml_branch_coverage=1 00:14:38.177 --rc genhtml_function_coverage=1 00:14:38.177 --rc genhtml_legend=1 00:14:38.177 --rc geninfo_all_blocks=1 00:14:38.177 --rc 
geninfo_unexecuted_blocks=1 00:14:38.177 00:14:38.177 ' 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.177 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:38.178 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:38.178 Cannot find device "nvmf_init_br" 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:38.178 Cannot find device "nvmf_init_br2" 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:38.178 Cannot find device "nvmf_tgt_br" 00:14:38.178 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:14:38.178 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:38.178 Cannot find device "nvmf_tgt_br2" 00:14:38.178 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:14:38.178 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:38.178 Cannot find device "nvmf_init_br" 00:14:38.178 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:14:38.178 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:38.437 Cannot find device "nvmf_init_br2" 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:38.437 Cannot find device "nvmf_tgt_br" 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:38.437 Cannot find device "nvmf_tgt_br2" 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:38.437 Cannot find device "nvmf_br" 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:38.437 Cannot find 
device "nvmf_init_if" 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:38.437 Cannot find device "nvmf_init_if2" 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:38.437 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:38.437 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:38.437 10:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:38.437 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:38.696 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:38.696 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:38.696 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:38.696 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:38.696 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:38.696 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:38.696 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:38.696 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:38.696 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:38.696 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:14:38.696 00:14:38.696 --- 10.0.0.3 ping statistics --- 00:14:38.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.696 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:14:38.696 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:38.696 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:38.696 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:14:38.696 00:14:38.696 --- 10.0.0.4 ping statistics --- 00:14:38.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.696 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:38.697 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:38.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:38.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:38.697 00:14:38.697 --- 10.0.0.1 ping statistics --- 00:14:38.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.697 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:38.697 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:38.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:14:38.697 00:14:38.697 --- 10.0.0.2 ping statistics --- 00:14:38.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.697 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:38.697 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.697 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:14:38.697 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:38.697 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.697 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:38.697 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:38.697 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.697 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:38.697 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:38.697 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:14:38.697 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:38.697 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:38.697 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:38.697 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73286 00:14:38.697 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:38.697 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73286 00:14:38.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.697 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 73286 ']' 00:14:38.697 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.697 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:38.697 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:38.697 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:38.697 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:38.697 [2024-11-15 10:32:39.459017] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:14:38.697 [2024-11-15 10:32:39.459187] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.955 [2024-11-15 10:32:39.626132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.955 [2024-11-15 10:32:39.709362] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.955 [2024-11-15 10:32:39.709432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.955 [2024-11-15 10:32:39.709448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.955 [2024-11-15 10:32:39.709458] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.956 [2024-11-15 10:32:39.709467] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.956 [2024-11-15 10:32:39.709938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.956 [2024-11-15 10:32:39.767366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:39.892 [2024-11-15 10:32:40.601235] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:39.892 Malloc0 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:39.892 [2024-11-15 10:32:40.641124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73324 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73325 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73326 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:39.892 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73324 00:14:40.151 [2024-11-15 10:32:40.829493] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:40.151 [2024-11-15 10:32:40.839706] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:40.152 [2024-11-15 10:32:40.840351] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:41.089 Initializing NVMe Controllers 00:14:41.089 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:41.089 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:14:41.089 Initialization complete. Launching workers. 00:14:41.089 ======================================================== 00:14:41.089 Latency(us) 00:14:41.089 Device Information : IOPS MiB/s Average min max 00:14:41.089 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3280.00 12.81 304.53 141.52 2195.71 00:14:41.089 ======================================================== 00:14:41.089 Total : 3280.00 12.81 304.53 141.52 2195.71 00:14:41.089 00:14:41.089 Initializing NVMe Controllers 00:14:41.089 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:41.089 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:14:41.089 Initialization complete. Launching workers. 00:14:41.089 ======================================================== 00:14:41.089 Latency(us) 00:14:41.089 Device Information : IOPS MiB/s Average min max 00:14:41.089 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3295.00 12.87 303.06 164.32 712.47 00:14:41.089 ======================================================== 00:14:41.089 Total : 3295.00 12.87 303.06 164.32 712.47 00:14:41.089 00:14:41.089 Initializing NVMe Controllers 00:14:41.089 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:41.089 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:14:41.089 Initialization complete. Launching workers. 
00:14:41.089 ======================================================== 00:14:41.089 Latency(us) 00:14:41.089 Device Information : IOPS MiB/s Average min max 00:14:41.089 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3298.99 12.89 302.71 179.14 797.29 00:14:41.089 ======================================================== 00:14:41.089 Total : 3298.99 12.89 302.71 179.14 797.29 00:14:41.089 00:14:41.089 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73325 00:14:41.089 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73326 00:14:41.089 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:41.089 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:14:41.089 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:41.089 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:14:41.089 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:41.089 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:14:41.089 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:41.089 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:41.089 rmmod nvme_tcp 00:14:41.349 rmmod nvme_fabrics 00:14:41.349 rmmod nvme_keyring 00:14:41.349 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:41.349 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:14:41.349 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:14:41.349 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73286 ']' 00:14:41.349 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73286 00:14:41.349 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 73286 ']' 00:14:41.349 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 73286 00:14:41.349 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:14:41.349 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:41.349 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73286 00:14:41.349 killing process with pid 73286 00:14:41.349 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:41.349 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:41.349 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73286' 00:14:41.349 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 73286 00:14:41.349 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@976 -- # wait 73286 00:14:41.608 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:41.608 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:41.608 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:41.608 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:14:41.608 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:14:41.608 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:14:41.608 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:41.608 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:41.608 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:41.608 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:41.608 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:41.608 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:41.608 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:41.608 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:41.608 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:41.608 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:41.608 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:41.608 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:41.869 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:41.869 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:41.869 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:41.869 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:41.869 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:41.869 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.869 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.869 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.869 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:14:41.869 00:14:41.869 real 0m3.878s 00:14:41.869 user 0m5.952s 00:14:41.869 
sys 0m1.471s 00:14:41.869 ************************************ 00:14:41.869 END TEST nvmf_control_msg_list 00:14:41.869 ************************************ 00:14:41.869 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:41.869 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:41.869 10:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:41.869 10:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:41.869 10:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:41.869 10:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:41.869 ************************************ 00:14:41.869 START TEST nvmf_wait_for_buf 00:14:41.869 ************************************ 00:14:41.869 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:42.129 * Looking for test storage... 00:14:42.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:42.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.130 --rc genhtml_branch_coverage=1 00:14:42.130 --rc genhtml_function_coverage=1 00:14:42.130 --rc genhtml_legend=1 00:14:42.130 --rc geninfo_all_blocks=1 00:14:42.130 --rc geninfo_unexecuted_blocks=1 00:14:42.130 00:14:42.130 ' 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:42.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.130 --rc genhtml_branch_coverage=1 00:14:42.130 --rc genhtml_function_coverage=1 00:14:42.130 --rc genhtml_legend=1 00:14:42.130 --rc geninfo_all_blocks=1 00:14:42.130 --rc geninfo_unexecuted_blocks=1 00:14:42.130 00:14:42.130 ' 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:42.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.130 --rc genhtml_branch_coverage=1 00:14:42.130 --rc genhtml_function_coverage=1 00:14:42.130 --rc genhtml_legend=1 00:14:42.130 --rc geninfo_all_blocks=1 00:14:42.130 --rc geninfo_unexecuted_blocks=1 00:14:42.130 00:14:42.130 ' 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:42.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.130 --rc genhtml_branch_coverage=1 00:14:42.130 --rc genhtml_function_coverage=1 00:14:42.130 --rc genhtml_legend=1 00:14:42.130 --rc geninfo_all_blocks=1 00:14:42.130 --rc geninfo_unexecuted_blocks=1 00:14:42.130 00:14:42.130 ' 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:42.130 10:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.130 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:42.131 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
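The "line 33: [: : integer expression expected" message recorded just above is harmless: inside build_nvmf_app_args a variable expands to an empty string and is then handed to the numeric -eq test, which bash's [ builtin rejects. A minimal stand-alone reproduction of that behaviour, illustrative only and not part of common.sh:

    flag=''
    [ "$flag" -eq 1 ]        # prints "[: : integer expression expected", exit status 2
    [ "${flag:-0}" -eq 1 ]   # defaulting the empty value to 0 keeps the test quiet

The non-zero status simply behaves like a false test, so the optional app argument is skipped and, as the trace shows, the run continues into nvmftestinit.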
00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:42.131 Cannot find device "nvmf_init_br" 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:42.131 Cannot find device "nvmf_init_br2" 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:42.131 Cannot find device "nvmf_tgt_br" 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:42.131 Cannot find device "nvmf_tgt_br2" 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:42.131 Cannot find device "nvmf_init_br" 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:42.131 Cannot find device "nvmf_init_br2" 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:42.131 Cannot find device "nvmf_tgt_br" 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:42.131 Cannot find device "nvmf_tgt_br2" 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:14:42.131 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:42.390 Cannot find device "nvmf_br" 00:14:42.390 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:14:42.390 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:42.390 Cannot find device "nvmf_init_if" 00:14:42.390 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:14:42.390 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:42.390 Cannot find device "nvmf_init_if2" 00:14:42.390 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:14:42.390 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:42.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:42.390 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:14:42.390 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:42.390 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:42.390 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:14:42.390 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:42.390 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:42.390 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:42.390 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:42.390 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:42.390 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:42.390 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:42.390 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:42.390 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:42.390 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:42.390 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:42.390 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:42.390 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:42.390 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:42.390 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:42.390 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:42.390 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:42.390 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:42.390 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:42.390 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:42.391 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:42.391 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:42.391 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:42.391 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:42.391 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:42.391 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:42.649 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:42.649 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.130 ms 00:14:42.649 00:14:42.649 --- 10.0.0.3 ping statistics --- 00:14:42.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.649 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:42.649 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:42.649 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:14:42.649 00:14:42.649 --- 10.0.0.4 ping statistics --- 00:14:42.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.649 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:42.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:42.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:14:42.649 00:14:42.649 --- 10.0.0.1 ping statistics --- 00:14:42.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.649 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:42.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:42.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:14:42.649 00:14:42.649 --- 10.0.0.2 ping statistics --- 00:14:42.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.649 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73563 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73563 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 73563 ']' 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:42.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:42.649 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:42.649 [2024-11-15 10:32:43.374754] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
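At this point nvmf_veth_init has finished building the virtual test network: a Linux bridge (nvmf_br) joins veth pairs whose initiator ends stay in the default namespace (10.0.0.1, 10.0.0.2) while the target ends sit inside the nvmf_tgt_ns_spdk namespace (10.0.0.3, 10.0.0.4), iptables ACCEPT rules are added for TCP port 4420, and the ping exchanges above verify reachability in both directions. A condensed sketch of the single-pair half of that wiring, with interface and address names taken from the trace (the full script also creates the nvmf_init_if2/nvmf_tgt_if2 pair and the *_br2 bridge ports):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br  master nvmf_br && ip link set nvmf_tgt_br up
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3   # default namespace -> target namespace, across the bridge

With the network up, nvmfappstart launches nvmf_tgt inside the namespace with --wait-for-rpc, which is why the SPDK/DPDK start-up banner follows.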
00:14:42.649 [2024-11-15 10:32:43.375436] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.906 [2024-11-15 10:32:43.517492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.906 [2024-11-15 10:32:43.605045] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.906 [2024-11-15 10:32:43.605152] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:42.906 [2024-11-15 10:32:43.605172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:42.906 [2024-11-15 10:32:43.605186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:42.906 [2024-11-15 10:32:43.605194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:42.906 [2024-11-15 10:32:43.605670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.906 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:42.906 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:14:42.906 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:42.906 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:42.906 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:42.906 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:42.906 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:42.906 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:42.906 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:14:42.906 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.906 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:42.906 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.906 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:14:42.906 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.906 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:42.906 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.906 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:14:42.906 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.906 10:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:43.163 [2024-11-15 10:32:43.777635] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:43.163 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.163 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:43.163 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.163 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:43.163 Malloc0 00:14:43.163 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.163 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:14:43.163 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.163 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:43.163 [2024-11-15 10:32:43.858819] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.163 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.163 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:14:43.163 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.163 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:43.163 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.163 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:43.163 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.163 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:43.163 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.163 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:43.164 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.164 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:43.164 [2024-11-15 10:32:43.883035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:43.164 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.164 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:43.421 [2024-11-15 10:32:44.094279] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:44.799 Initializing NVMe Controllers 00:14:44.799 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:44.799 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:14:44.799 Initialization complete. Launching workers. 00:14:44.799 ======================================================== 00:14:44.799 Latency(us) 00:14:44.799 Device Information : IOPS MiB/s Average min max 00:14:44.799 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 500.97 62.62 7999.34 7843.57 8195.27 00:14:44.799 ======================================================== 00:14:44.799 Total : 500.97 62.62 7999.34 7843.57 8195.27 00:14:44.799 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:44.799 rmmod nvme_tcp 00:14:44.799 rmmod nvme_fabrics 00:14:44.799 rmmod nvme_keyring 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73563 ']' 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73563 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 73563 ']' 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # 
kill -0 73563 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # uname 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73563 00:14:44.799 killing process with pid 73563 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73563' 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 73563 00:14:44.799 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 73563 00:14:45.059 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:45.059 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:45.059 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:45.059 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:14:45.059 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:14:45.059 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:14:45.059 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:45.059 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:45.059 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:45.059 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:45.059 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:45.059 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:45.059 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:45.318 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:45.318 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:45.318 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:45.318 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:45.318 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:45.318 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:45.318 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:45.318 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:45.318 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:45.318 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:45.318 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.318 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:45.318 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.318 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:14:45.318 00:14:45.318 real 0m3.450s 00:14:45.318 user 0m2.731s 00:14:45.318 sys 0m0.869s 00:14:45.318 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:45.318 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:45.318 ************************************ 00:14:45.318 END TEST nvmf_wait_for_buf 00:14:45.318 ************************************ 00:14:45.318 10:32:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:14:45.318 10:32:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:14:45.318 10:32:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:14:45.318 10:32:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:45.318 10:32:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:45.318 10:32:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:45.318 ************************************ 00:14:45.318 START TEST nvmf_nsid 00:14:45.318 ************************************ 00:14:45.318 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:14:45.631 * Looking for test storage... 
00:14:45.631 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:45.631 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:45.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.632 --rc genhtml_branch_coverage=1 00:14:45.632 --rc genhtml_function_coverage=1 00:14:45.632 --rc genhtml_legend=1 00:14:45.632 --rc geninfo_all_blocks=1 00:14:45.632 --rc geninfo_unexecuted_blocks=1 00:14:45.632 00:14:45.632 ' 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:45.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.632 --rc genhtml_branch_coverage=1 00:14:45.632 --rc genhtml_function_coverage=1 00:14:45.632 --rc genhtml_legend=1 00:14:45.632 --rc geninfo_all_blocks=1 00:14:45.632 --rc geninfo_unexecuted_blocks=1 00:14:45.632 00:14:45.632 ' 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:45.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.632 --rc genhtml_branch_coverage=1 00:14:45.632 --rc genhtml_function_coverage=1 00:14:45.632 --rc genhtml_legend=1 00:14:45.632 --rc geninfo_all_blocks=1 00:14:45.632 --rc geninfo_unexecuted_blocks=1 00:14:45.632 00:14:45.632 ' 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:45.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.632 --rc genhtml_branch_coverage=1 00:14:45.632 --rc genhtml_function_coverage=1 00:14:45.632 --rc genhtml_legend=1 00:14:45.632 --rc geninfo_all_blocks=1 00:14:45.632 --rc geninfo_unexecuted_blocks=1 00:14:45.632 00:14:45.632 ' 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
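The scripts/common.sh trace above is the harness comparing the installed lcov version (here 1.15) against 2 with "lt 1.15 2" via cmp_versions, so it knows which coverage flags to export; the comparison splits both versions on dots and compares them component by component. A minimal stand-alone sketch of that idea, using a hypothetical helper name cmp_lt rather than the harness's own cmp_versions:

    cmp_lt() {                               # exit 0 when version $1 < version $2
        local -a a b
        IFS=. read -r -a a <<< "$1"
        IFS=. read -r -a b <<< "$2"
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                             # versions are equal
    }
    cmp_lt 1.15 2 && echo 'lcov older than 2.x: keep the legacy --rc lcov_* options'

As in the trace, an lcov 1.x result leaves LCOV_OPTS/LCOV pointing at the older --rc lcov_branch_coverage/lcov_function_coverage spellings.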
00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:45.632 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.632 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:45.633 Cannot find device "nvmf_init_br" 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:45.633 Cannot find device "nvmf_init_br2" 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:45.633 Cannot find device "nvmf_tgt_br" 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:45.633 Cannot find device "nvmf_tgt_br2" 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:45.633 Cannot find device "nvmf_init_br" 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:45.633 Cannot find device "nvmf_init_br2" 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:45.633 Cannot find device "nvmf_tgt_br" 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:45.633 Cannot find device "nvmf_tgt_br2" 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:14:45.633 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:45.895 Cannot find device "nvmf_br" 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:45.895 Cannot find device "nvmf_init_if" 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:45.895 Cannot find device "nvmf_init_if2" 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:45.895 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:14:45.895 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
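nvmf_veth_init above builds the virtual test network: initiator veth pairs stay in the root namespace, the target pairs are moved into nvmf_tgt_ns_spdk, addresses 10.0.0.1-10.0.0.4/24 are assigned, and all bridge-side peers are enslaved to nvmf_br (the "Cannot find device" lines are just the best-effort teardown of a previous run). A condensed sketch of the same pattern for a single initiator/target pair, assuming iproute2 and root privileges; interface names and addresses mirror the trace, error handling is omitted:

#!/usr/bin/env bash
set -e
ip netns add nvmf_tgt_ns_spdk                      # target-side namespace

# one veth pair per side; the *_br ends will be joined by a bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk     # move the target end into the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if           # initiator address (root namespace)
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge                    # L2 path between the two namespaces
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

ping -c 1 10.0.0.3                                 # root namespace -> target, as in the log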
00:14:45.895 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:45.896 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:46.157 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:46.157 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:14:46.157 00:14:46.157 --- 10.0.0.3 ping statistics --- 00:14:46.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.157 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:46.157 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:46.157 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:14:46.157 00:14:46.157 --- 10.0.0.4 ping statistics --- 00:14:46.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.157 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:46.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:46.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:14:46.157 00:14:46.157 --- 10.0.0.1 ping statistics --- 00:14:46.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.157 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:46.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:46.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:14:46.157 00:14:46.157 --- 10.0.0.2 ping statistics --- 00:14:46.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.157 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73829 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73829 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 73829 ']' 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:46.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:46.157 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:46.157 [2024-11-15 10:32:46.861541] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:14:46.157 [2024-11-15 10:32:46.861646] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.417 [2024-11-15 10:32:47.009471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.417 [2024-11-15 10:32:47.067076] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.417 [2024-11-15 10:32:47.067138] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.417 [2024-11-15 10:32:47.067150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.417 [2024-11-15 10:32:47.067159] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.417 [2024-11-15 10:32:47.067167] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:46.417 [2024-11-15 10:32:47.067579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.417 [2024-11-15 10:32:47.120121] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73848 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=fb0bb0ca-edc5-410f-a8a6-842c82980726 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=efd38e62-332a-4b5c-b572-95a2a475aebe 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=6994bea5-e43f-478a-a197-9f56cb8d0b44 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.417 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:46.417 null0 00:14:46.676 null1 00:14:46.676 null2 00:14:46.676 [2024-11-15 10:32:47.283619] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.676 [2024-11-15 10:32:47.297779] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:14:46.676 [2024-11-15 10:32:47.297883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73848 ] 00:14:46.676 [2024-11-15 10:32:47.307807] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:46.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:14:46.676 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.676 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73848 /var/tmp/tgt2.sock 00:14:46.676 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 73848 ']' 00:14:46.676 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:14:46.676 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:46.676 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
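At this point the nsid test has generated one UUID per namespace (ns1uuid, ns2uuid, ns3uuid). Later in the run each attached namespace's NGUID is read back with `nvme id-ns ... -o json | jq -r .nguid` and compared against the dash-stripped, upper-cased form of the corresponding UUID, which is all the uuid2nguid helper in the trace does (`tr -d -`). A minimal sketch of that verification, assuming nvme-cli, jq, and a controller already connected as nvme0; names follow the trace:

#!/usr/bin/env bash
# Expected NGUID: the namespace UUID with the dashes removed.
uuid2nguid() { tr -d - <<< "$1"; }

ns1uuid=$(uuidgen)                       # e.g. fb0bb0ca-edc5-410f-a8a6-842c82980726
expected=$(uuid2nguid "$ns1uuid")

# NGUID the target reports for namespace 1 of controller nvme0.
actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)

# Case-insensitive compare; the test upper-cases both sides before matching.
if [[ ${expected^^} == "${actual^^}" ]]; then
    echo "NGUID matches ns1uuid"
else
    echo "NGUID mismatch: expected $expected, got $actual" >&2
fi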
00:14:46.676 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:46.676 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:46.676 [2024-11-15 10:32:47.449522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.676 [2024-11-15 10:32:47.517847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.935 [2024-11-15 10:32:47.597640] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:47.195 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:47.195 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:14:47.195 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:14:47.454 [2024-11-15 10:32:48.240667] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.454 [2024-11-15 10:32:48.256885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:14:47.454 nvme0n1 nvme0n2 00:14:47.454 nvme1n1 00:14:47.713 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:14:47.713 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:14:47.713 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:14:47.713 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:14:47.713 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:14:47.713 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:14:47.713 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:14:47.713 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:14:47.713 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:14:47.713 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:14:47.713 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:14:47.713 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:14:47.713 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:14:47.713 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:14:47.713 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:14:47.713 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:14:48.651 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:14:48.651 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:14:48.651 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:14:48.651 10:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:14:48.651 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:14:48.651 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid fb0bb0ca-edc5-410f-a8a6-842c82980726 00:14:48.651 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:48.651 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:14:48.651 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:14:48.651 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:14:48.651 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=fb0bb0caedc5410fa8a6842c82980726 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo FB0BB0CAEDC5410FA8A6842C82980726 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ FB0BB0CAEDC5410FA8A6842C82980726 == \F\B\0\B\B\0\C\A\E\D\C\5\4\1\0\F\A\8\A\6\8\4\2\C\8\2\9\8\0\7\2\6 ]] 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid efd38e62-332a-4b5c-b572-95a2a475aebe 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=efd38e62332a4b5cb57295a2a475aebe 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo EFD38E62332A4B5CB57295A2A475AEBE 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ EFD38E62332A4B5CB57295A2A475AEBE == \E\F\D\3\8\E\6\2\3\3\2\A\4\B\5\C\B\5\7\2\9\5\A\2\A\4\7\5\A\E\B\E ]] 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:14:48.910 10:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 6994bea5-e43f-478a-a197-9f56cb8d0b44 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=6994bea5e43f478aa1979f56cb8d0b44 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 6994BEA5E43F478AA1979F56CB8D0B44 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 6994BEA5E43F478AA1979F56CB8D0B44 == \6\9\9\4\B\E\A\5\E\4\3\F\4\7\8\A\A\1\9\7\9\F\5\6\C\B\8\D\0\B\4\4 ]] 00:14:48.910 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:14:49.170 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:14:49.170 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:14:49.170 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73848 00:14:49.170 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 73848 ']' 00:14:49.170 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 73848 00:14:49.170 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:14:49.170 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:49.170 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73848 00:14:49.170 killing process with pid 73848 00:14:49.170 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:49.170 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:49.170 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73848' 00:14:49.170 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 73848 00:14:49.170 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 73848 00:14:49.739 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:14:49.739 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:49.739 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:14:49.739 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:14:49.739 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:14:49.739 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:49.739 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:49.739 rmmod nvme_tcp 00:14:49.739 rmmod nvme_fabrics 00:14:49.739 rmmod nvme_keyring 00:14:49.739 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:49.739 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:14:49.739 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:14:49.739 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73829 ']' 00:14:49.739 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73829 00:14:49.739 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 73829 ']' 00:14:49.739 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 73829 00:14:49.739 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:14:49.739 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:49.739 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73829 00:14:49.739 killing process with pid 73829 00:14:49.739 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:49.739 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:49.739 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73829' 00:14:49.739 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 73829 00:14:49.739 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 73829 00:14:49.998 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:49.998 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:49.998 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:49.998 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:14:49.998 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:14:49.998 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:49.998 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:14:49.998 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:49.998 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:49.998 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:49.998 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:49.998 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:49.998 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:14:49.998 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:49.998 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:49.998 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:49.998 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:49.998 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:49.998 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:49.998 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:50.258 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:50.258 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:50.258 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:50.258 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.258 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.258 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.258 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:14:50.258 00:14:50.258 real 0m4.769s 00:14:50.258 user 0m7.158s 00:14:50.258 sys 0m1.716s 00:14:50.258 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:50.258 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:50.258 ************************************ 00:14:50.258 END TEST nvmf_nsid 00:14:50.258 ************************************ 00:14:50.258 10:32:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:50.258 00:14:50.258 real 5m16.627s 00:14:50.258 user 11m3.729s 00:14:50.258 sys 1m9.776s 00:14:50.258 ************************************ 00:14:50.258 END TEST nvmf_target_extra 00:14:50.258 ************************************ 00:14:50.258 10:32:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:50.258 10:32:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:50.258 10:32:50 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:50.258 10:32:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:50.258 10:32:51 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:50.258 10:32:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:50.258 ************************************ 00:14:50.258 START TEST nvmf_host 00:14:50.258 ************************************ 00:14:50.258 10:32:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:50.258 * Looking for test storage... 
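The nsid teardown just above shows two reusable cleanup patterns from the trace: killprocess confirms the PID still belongs to the expected reactor process (`kill -0`, then `ps --no-headers -o comm=`) before signalling it, and iptr removes only the firewall rules the test tagged, by restoring an iptables dump with every SPDK_NVMF-commented line filtered out. A compact sketch of both, assuming root and standard procps/iptables tools; function names follow the trace but the bodies are simplified:

#!/usr/bin/env bash
# Kill a test target only if the PID is alive and is the process we expect.
killprocess_sketch() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0            # already gone
    local name
    name=$(ps --no-headers -o comm= -p "$pid")        # e.g. reactor_0 / reactor_1
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                   # wait only works if we spawned it
}

# Drop only the iptables rules carrying the SPDK_NVMF comment tag.
iptr_sketch() {
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}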
00:14:50.258 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:50.258 10:32:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:50.258 10:32:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:50.258 10:32:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:50.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.518 --rc genhtml_branch_coverage=1 00:14:50.518 --rc genhtml_function_coverage=1 00:14:50.518 --rc genhtml_legend=1 00:14:50.518 --rc geninfo_all_blocks=1 00:14:50.518 --rc geninfo_unexecuted_blocks=1 00:14:50.518 00:14:50.518 ' 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:50.518 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:50.518 --rc genhtml_branch_coverage=1 00:14:50.518 --rc genhtml_function_coverage=1 00:14:50.518 --rc genhtml_legend=1 00:14:50.518 --rc geninfo_all_blocks=1 00:14:50.518 --rc geninfo_unexecuted_blocks=1 00:14:50.518 00:14:50.518 ' 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:50.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.518 --rc genhtml_branch_coverage=1 00:14:50.518 --rc genhtml_function_coverage=1 00:14:50.518 --rc genhtml_legend=1 00:14:50.518 --rc geninfo_all_blocks=1 00:14:50.518 --rc geninfo_unexecuted_blocks=1 00:14:50.518 00:14:50.518 ' 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:50.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.518 --rc genhtml_branch_coverage=1 00:14:50.518 --rc genhtml_function_coverage=1 00:14:50.518 --rc genhtml_legend=1 00:14:50.518 --rc geninfo_all_blocks=1 00:14:50.518 --rc geninfo_unexecuted_blocks=1 00:14:50.518 00:14:50.518 ' 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.518 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:50.519 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:50.519 
10:32:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:50.519 ************************************ 00:14:50.519 START TEST nvmf_identify 00:14:50.519 ************************************ 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:50.519 * Looking for test storage... 00:14:50.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:14:50.519 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:50.779 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:50.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.779 --rc genhtml_branch_coverage=1 00:14:50.779 --rc genhtml_function_coverage=1 00:14:50.779 --rc genhtml_legend=1 00:14:50.779 --rc geninfo_all_blocks=1 00:14:50.779 --rc geninfo_unexecuted_blocks=1 00:14:50.779 00:14:50.779 ' 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:50.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.780 --rc genhtml_branch_coverage=1 00:14:50.780 --rc genhtml_function_coverage=1 00:14:50.780 --rc genhtml_legend=1 00:14:50.780 --rc geninfo_all_blocks=1 00:14:50.780 --rc geninfo_unexecuted_blocks=1 00:14:50.780 00:14:50.780 ' 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:50.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.780 --rc genhtml_branch_coverage=1 00:14:50.780 --rc genhtml_function_coverage=1 00:14:50.780 --rc genhtml_legend=1 00:14:50.780 --rc geninfo_all_blocks=1 00:14:50.780 --rc geninfo_unexecuted_blocks=1 00:14:50.780 00:14:50.780 ' 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:50.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.780 --rc genhtml_branch_coverage=1 00:14:50.780 --rc genhtml_function_coverage=1 00:14:50.780 --rc genhtml_legend=1 00:14:50.780 --rc geninfo_all_blocks=1 00:14:50.780 --rc geninfo_unexecuted_blocks=1 00:14:50.780 00:14:50.780 ' 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.780 
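As the defaults above show, common.sh derives the host identity from nvme gen-hostnqn: the generated NQN embeds a UUID, and that same UUID is reused as the host ID. A minimal sketch of that relationship (an illustration of the pattern visible in the trace, not the exact common.sh code):

  # NVME_HOSTNQN looks like nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTNQN=$(nvme gen-hostnqn)
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}        # keep only the UUID suffix
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")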
10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:50.780 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:50.780 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.781 10:32:51 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:50.781 Cannot find device "nvmf_init_br" 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:50.781 Cannot find device "nvmf_init_br2" 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:50.781 Cannot find device "nvmf_tgt_br" 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:14:50.781 Cannot find device "nvmf_tgt_br2" 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:50.781 Cannot find device "nvmf_init_br" 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:50.781 Cannot find device "nvmf_init_br2" 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:50.781 Cannot find device "nvmf_tgt_br" 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:50.781 Cannot find device "nvmf_tgt_br2" 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:50.781 Cannot find device "nvmf_br" 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:50.781 Cannot find device "nvmf_init_if" 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:50.781 Cannot find device "nvmf_init_if2" 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:50.781 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:50.781 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:50.781 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:51.063 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:51.063 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:51.063 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:51.063 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:51.064 
10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:51.064 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:51.064 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:14:51.064 00:14:51.064 --- 10.0.0.3 ping statistics --- 00:14:51.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.064 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:51.064 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:51.064 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:14:51.064 00:14:51.064 --- 10.0.0.4 ping statistics --- 00:14:51.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.064 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:51.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:51.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:51.064 00:14:51.064 --- 10.0.0.1 ping statistics --- 00:14:51.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.064 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:51.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:14:51.064 00:14:51.064 --- 10.0.0.2 ping statistics --- 00:14:51.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.064 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:51.064 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:51.327 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:51.327 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:51.327 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:51.327 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74205 00:14:51.327 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:51.327 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:51.327 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74205 00:14:51.327 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 74205 ']' 00:14:51.327 
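At this point nvmf_veth_init has built the whole virtual test network: initiator and target veth pairs, the target ends moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.3/10.0.0.4, the initiator ends addressed 10.0.0.1/10.0.0.2, everything bridged over nvmf_br, iptables openings for port 4420, and pings in both directions confirming reachability. Condensed to its essentials (one initiator/target path shown; the script repeats the same steps for the second pair):

  # sketch of the veth/namespace topology the trace builds, reduced to one path
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3        # host initiator -> target namespace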
10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.327 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:51.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.327 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.327 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:51.327 10:32:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:51.327 [2024-11-15 10:32:51.977580] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:14:51.327 [2024-11-15 10:32:51.977681] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.327 [2024-11-15 10:32:52.127105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:51.586 [2024-11-15 10:32:52.198115] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.586 [2024-11-15 10:32:52.198187] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.586 [2024-11-15 10:32:52.198201] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.586 [2024-11-15 10:32:52.198213] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.586 [2024-11-15 10:32:52.198223] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
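identify.sh has started the target application inside the namespace and is now blocked in waitforlisten until PID 74205 answers RPCs on /var/tmp/spdk.sock. A rough equivalent of that launch-and-wait step (the polling loop is only an illustration of the idea, not SPDK's actual waitforlisten implementation; spdk_get_version is used as a cheap liveness query):

  # start nvmf_tgt in the target namespace, as in the trace
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the RPC socket until the app responds
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done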
00:14:51.586 [2024-11-15 10:32:52.199315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.586 [2024-11-15 10:32:52.199403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.586 [2024-11-15 10:32:52.199451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:51.586 [2024-11-15 10:32:52.199457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.586 [2024-11-15 10:32:52.263955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:51.586 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:51.586 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:14:51.586 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:51.586 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.586 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:51.586 [2024-11-15 10:32:52.349753] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.586 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.586 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:51.586 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:51.586 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:51.586 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:51.586 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.586 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:51.586 Malloc0 00:14:51.586 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.586 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:51.586 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.586 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:51.846 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.846 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:51.846 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.846 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:51.846 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.846 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:51.846 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.846 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:51.846 [2024-11-15 10:32:52.453192] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:51.846 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.846 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:51.846 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.846 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:51.846 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.846 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:51.846 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.846 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:51.846 [ 00:14:51.846 { 00:14:51.846 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:51.846 "subtype": "Discovery", 00:14:51.846 "listen_addresses": [ 00:14:51.846 { 00:14:51.846 "trtype": "TCP", 00:14:51.846 "adrfam": "IPv4", 00:14:51.846 "traddr": "10.0.0.3", 00:14:51.846 "trsvcid": "4420" 00:14:51.846 } 00:14:51.846 ], 00:14:51.846 "allow_any_host": true, 00:14:51.846 "hosts": [] 00:14:51.846 }, 00:14:51.846 { 00:14:51.846 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:51.846 "subtype": "NVMe", 00:14:51.846 "listen_addresses": [ 00:14:51.846 { 00:14:51.846 "trtype": "TCP", 00:14:51.846 "adrfam": "IPv4", 00:14:51.846 "traddr": "10.0.0.3", 00:14:51.846 "trsvcid": "4420" 00:14:51.846 } 00:14:51.846 ], 00:14:51.846 "allow_any_host": true, 00:14:51.846 "hosts": [], 00:14:51.846 "serial_number": "SPDK00000000000001", 00:14:51.846 "model_number": "SPDK bdev Controller", 00:14:51.846 "max_namespaces": 32, 00:14:51.846 "min_cntlid": 1, 00:14:51.846 "max_cntlid": 65519, 00:14:51.846 "namespaces": [ 00:14:51.846 { 00:14:51.846 "nsid": 1, 00:14:51.846 "bdev_name": "Malloc0", 00:14:51.846 "name": "Malloc0", 00:14:51.846 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:51.846 "eui64": "ABCDEF0123456789", 00:14:51.846 "uuid": "7fc207f9-7f3d-4541-86a3-f9c64def9c5e" 00:14:51.846 } 00:14:51.846 ] 00:14:51.846 } 00:14:51.846 ] 00:14:51.846 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.846 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:51.846 [2024-11-15 10:32:52.508572] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
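Everything the test just configured through rpc_cmd can be replayed by hand against the same socket: create the TCP transport, back a namespace with a malloc bdev, publish it under cnode1, and open data plus discovery listeners on 10.0.0.3:4420. The equivalent rpc.py sequence, mirroring the flags visible in the trace:

  rpc=./scripts/rpc.py; sock=/var/tmp/spdk.sock
  $rpc -s $sock nvmf_create_transport -t tcp -o -u 8192
  $rpc -s $sock bdev_malloc_create 64 512 -b Malloc0
  $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc -s $sock nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  $rpc -s $sock nvmf_get_subsystems          # prints the JSON listing shown above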
00:14:51.846 [2024-11-15 10:32:52.508887] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74232 ] 00:14:51.846 [2024-11-15 10:32:52.680121] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:14:51.846 [2024-11-15 10:32:52.680244] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:51.846 [2024-11-15 10:32:52.680260] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:51.846 [2024-11-15 10:32:52.680289] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:51.846 [2024-11-15 10:32:52.680308] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:51.846 [2024-11-15 10:32:52.680875] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:14:51.846 [2024-11-15 10:32:52.681000] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e88750 0 00:14:51.846 [2024-11-15 10:32:52.688111] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:51.847 [2024-11-15 10:32:52.688158] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:51.847 [2024-11-15 10:32:52.688171] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:51.847 [2024-11-15 10:32:52.688177] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:51.847 [2024-11-15 10:32:52.688229] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:51.847 [2024-11-15 10:32:52.688243] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:51.847 [2024-11-15 10:32:52.688251] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e88750) 00:14:51.847 [2024-11-15 10:32:52.688275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:51.847 [2024-11-15 10:32:52.688327] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eec740, cid 0, qid 0 00:14:51.847 [2024-11-15 10:32:52.696096] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:51.847 [2024-11-15 10:32:52.696138] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:51.847 [2024-11-15 10:32:52.696149] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:51.847 [2024-11-15 10:32:52.696158] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eec740) on tqpair=0x1e88750 00:14:51.847 [2024-11-15 10:32:52.696183] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:51.847 [2024-11-15 10:32:52.696206] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:14:51.847 [2024-11-15 10:32:52.696218] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:14:51.847 [2024-11-15 10:32:52.696247] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:51.847 [2024-11-15 10:32:52.696257] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
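The DEBUG lines that follow are spdk_nvme_identify's user-space initiator walking the standard controller bring-up over the admin queue: FABRIC CONNECT, property reads of VS and CAP, toggling CC.EN while waiting on CSTS.RDY, then Identify, AER and keep-alive configuration. The same listener can also be exercised from the kernel initiator with nvme-cli, which performs the identical handshake (illustrative only, not part of this test; assumes the nvme-tcp module is loaded, as the trace's modprobe did):

  # discovery log from the target, then a full fabric connect as the generated host
  nvme discover -t tcp -a 10.0.0.3 -s 4420
  nvme connect  -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33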
00:14:51.847 [2024-11-15 10:32:52.696265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e88750) 00:14:51.847 [2024-11-15 10:32:52.696282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.847 [2024-11-15 10:32:52.696333] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eec740, cid 0, qid 0 00:14:51.847 [2024-11-15 10:32:52.696437] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:51.847 [2024-11-15 10:32:52.696454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:51.847 [2024-11-15 10:32:52.696461] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:51.847 [2024-11-15 10:32:52.696469] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eec740) on tqpair=0x1e88750 00:14:51.847 [2024-11-15 10:32:52.696479] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:14:51.847 [2024-11-15 10:32:52.696493] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:14:51.847 [2024-11-15 10:32:52.696506] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:51.847 [2024-11-15 10:32:52.696515] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:51.847 [2024-11-15 10:32:52.696522] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e88750) 00:14:51.847 [2024-11-15 10:32:52.696535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.847 [2024-11-15 10:32:52.696575] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eec740, cid 0, qid 0 00:14:51.847 [2024-11-15 10:32:52.696652] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:51.847 [2024-11-15 10:32:52.696666] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:51.847 [2024-11-15 10:32:52.696674] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:51.847 [2024-11-15 10:32:52.696681] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eec740) on tqpair=0x1e88750 00:14:51.847 [2024-11-15 10:32:52.696693] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:14:51.847 [2024-11-15 10:32:52.696708] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:14:51.847 [2024-11-15 10:32:52.696723] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:51.847 [2024-11-15 10:32:52.696731] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:51.847 [2024-11-15 10:32:52.696737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e88750) 00:14:51.847 [2024-11-15 10:32:52.696750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.847 [2024-11-15 10:32:52.696786] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eec740, cid 0, qid 0 00:14:51.847 [2024-11-15 10:32:52.696855] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:51.847 [2024-11-15 10:32:52.696869] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:51.847 [2024-11-15 10:32:52.696877] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:51.847 [2024-11-15 10:32:52.696885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eec740) on tqpair=0x1e88750 00:14:51.847 [2024-11-15 10:32:52.696895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:51.847 [2024-11-15 10:32:52.696913] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:51.847 [2024-11-15 10:32:52.696922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:51.847 [2024-11-15 10:32:52.696929] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e88750) 00:14:51.847 [2024-11-15 10:32:52.696941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.847 [2024-11-15 10:32:52.696977] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eec740, cid 0, qid 0 00:14:51.847 [2024-11-15 10:32:52.697044] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:51.847 [2024-11-15 10:32:52.697097] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:51.847 [2024-11-15 10:32:52.697108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:51.847 [2024-11-15 10:32:52.697116] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eec740) on tqpair=0x1e88750 00:14:51.847 [2024-11-15 10:32:52.697127] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:14:51.847 [2024-11-15 10:32:52.697138] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:14:51.847 [2024-11-15 10:32:52.697153] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:51.847 [2024-11-15 10:32:52.697276] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:14:51.847 [2024-11-15 10:32:52.697303] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:51.847 [2024-11-15 10:32:52.697322] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:51.847 [2024-11-15 10:32:52.697332] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:51.847 [2024-11-15 10:32:52.697339] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e88750) 00:14:51.847 [2024-11-15 10:32:52.697352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.847 [2024-11-15 10:32:52.697392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eec740, cid 0, qid 0 00:14:51.847 [2024-11-15 10:32:52.697481] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:51.847 [2024-11-15 10:32:52.697500] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:51.847 [2024-11-15 10:32:52.697507] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:14:51.847 [2024-11-15 10:32:52.697514] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eec740) on tqpair=0x1e88750 00:14:51.847 [2024-11-15 10:32:52.697523] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:52.110 [2024-11-15 10:32:52.697541] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.110 [2024-11-15 10:32:52.697549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.110 [2024-11-15 10:32:52.697555] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e88750) 00:14:52.110 [2024-11-15 10:32:52.697567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.110 [2024-11-15 10:32:52.697600] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eec740, cid 0, qid 0 00:14:52.110 [2024-11-15 10:32:52.697673] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.110 [2024-11-15 10:32:52.697702] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.110 [2024-11-15 10:32:52.697712] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.110 [2024-11-15 10:32:52.697719] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eec740) on tqpair=0x1e88750 00:14:52.110 [2024-11-15 10:32:52.697728] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:52.110 [2024-11-15 10:32:52.697737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:14:52.110 [2024-11-15 10:32:52.697751] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:14:52.110 [2024-11-15 10:32:52.697776] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:14:52.110 [2024-11-15 10:32:52.697794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.110 [2024-11-15 10:32:52.697801] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e88750) 00:14:52.110 [2024-11-15 10:32:52.697814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.110 [2024-11-15 10:32:52.697850] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eec740, cid 0, qid 0 00:14:52.110 [2024-11-15 10:32:52.698000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:52.110 [2024-11-15 10:32:52.698030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:52.111 [2024-11-15 10:32:52.698040] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.698068] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e88750): datao=0, datal=4096, cccid=0 00:14:52.111 [2024-11-15 10:32:52.698082] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eec740) on tqpair(0x1e88750): expected_datao=0, payload_size=4096 00:14:52.111 [2024-11-15 10:32:52.698091] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.698107] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.698114] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.698129] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.111 [2024-11-15 10:32:52.698139] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.111 [2024-11-15 10:32:52.698145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.698152] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eec740) on tqpair=0x1e88750 00:14:52.111 [2024-11-15 10:32:52.698167] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:14:52.111 [2024-11-15 10:32:52.698177] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:14:52.111 [2024-11-15 10:32:52.698184] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:14:52.111 [2024-11-15 10:32:52.698194] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:14:52.111 [2024-11-15 10:32:52.698202] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:14:52.111 [2024-11-15 10:32:52.698211] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:14:52.111 [2024-11-15 10:32:52.698233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:14:52.111 [2024-11-15 10:32:52.698246] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.698253] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.698260] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e88750) 00:14:52.111 [2024-11-15 10:32:52.698273] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:52.111 [2024-11-15 10:32:52.698309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eec740, cid 0, qid 0 00:14:52.111 [2024-11-15 10:32:52.698406] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.111 [2024-11-15 10:32:52.698430] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.111 [2024-11-15 10:32:52.698438] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.698446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eec740) on tqpair=0x1e88750 00:14:52.111 [2024-11-15 10:32:52.698459] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.698467] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.698474] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e88750) 00:14:52.111 [2024-11-15 10:32:52.698486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.111 
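The IDENTIFY admin command above (opcode 06h with cdw10:00000001, i.e. CNS 01h, Identify Controller) has completed, and the initiator has recorded the resulting limits: transport max_xfer_size 4294967295, MDTS-derived max_xfer_size 131072, 16 SGEs, fused compare-and-write supported. It is now queueing AER and keep-alive configuration. For reference, the same Identify Controller command can be issued from the kernel side with nvme-cli once a controller is connected (hedged example; /dev/nvme1 is a placeholder device name):

  # Identify Controller = opcode 06h, CNS 01h in CDW10, 4 KiB read buffer
  nvme admin-passthru /dev/nvme1 --opcode=0x06 --cdw10=0x01 --data-len=4096 --read
  # or simply the decoded view:
  nvme id-ctrl /dev/nvme1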
[2024-11-15 10:32:52.698498] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.698505] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.698511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e88750) 00:14:52.111 [2024-11-15 10:32:52.698522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.111 [2024-11-15 10:32:52.698533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.698540] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.698546] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e88750) 00:14:52.111 [2024-11-15 10:32:52.698557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.111 [2024-11-15 10:32:52.698575] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.698582] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.698588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e88750) 00:14:52.111 [2024-11-15 10:32:52.698598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.111 [2024-11-15 10:32:52.698607] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:52.111 [2024-11-15 10:32:52.698640] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:52.111 [2024-11-15 10:32:52.698655] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.698662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e88750) 00:14:52.111 [2024-11-15 10:32:52.698674] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.111 [2024-11-15 10:32:52.698714] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eec740, cid 0, qid 0 00:14:52.111 [2024-11-15 10:32:52.698734] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eec8c0, cid 1, qid 0 00:14:52.111 [2024-11-15 10:32:52.698742] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeca40, cid 2, qid 0 00:14:52.111 [2024-11-15 10:32:52.698752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecbc0, cid 3, qid 0 00:14:52.111 [2024-11-15 10:32:52.698761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecd40, cid 4, qid 0 00:14:52.111 [2024-11-15 10:32:52.698886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.111 [2024-11-15 10:32:52.698909] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.111 [2024-11-15 10:32:52.698918] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.698926] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eecd40) on tqpair=0x1e88750 00:14:52.111 [2024-11-15 
10:32:52.698936] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:14:52.111 [2024-11-15 10:32:52.698946] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:14:52.111 [2024-11-15 10:32:52.698967] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.698977] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e88750) 00:14:52.111 [2024-11-15 10:32:52.698990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.111 [2024-11-15 10:32:52.699030] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecd40, cid 4, qid 0 00:14:52.111 [2024-11-15 10:32:52.699137] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:52.111 [2024-11-15 10:32:52.699158] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:52.111 [2024-11-15 10:32:52.699169] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.699177] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e88750): datao=0, datal=4096, cccid=4 00:14:52.111 [2024-11-15 10:32:52.699185] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eecd40) on tqpair(0x1e88750): expected_datao=0, payload_size=4096 00:14:52.111 [2024-11-15 10:32:52.699193] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.699205] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.699213] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.699227] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.111 [2024-11-15 10:32:52.699238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.111 [2024-11-15 10:32:52.699244] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.699251] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eecd40) on tqpair=0x1e88750 00:14:52.111 [2024-11-15 10:32:52.699277] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:14:52.111 [2024-11-15 10:32:52.699332] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.699351] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e88750) 00:14:52.111 [2024-11-15 10:32:52.699366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.111 [2024-11-15 10:32:52.699381] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.699389] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.111 [2024-11-15 10:32:52.699395] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e88750) 00:14:52.111 [2024-11-15 10:32:52.699406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.111 [2024-11-15 10:32:52.699458] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecd40, cid 4, qid 0 00:14:52.112 [2024-11-15 10:32:52.699481] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecec0, cid 5, qid 0 00:14:52.112 [2024-11-15 10:32:52.699668] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:52.112 [2024-11-15 10:32:52.699692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:52.112 [2024-11-15 10:32:52.699702] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:52.112 [2024-11-15 10:32:52.699709] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e88750): datao=0, datal=1024, cccid=4 00:14:52.112 [2024-11-15 10:32:52.699718] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eecd40) on tqpair(0x1e88750): expected_datao=0, payload_size=1024 00:14:52.112 [2024-11-15 10:32:52.699727] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.112 [2024-11-15 10:32:52.699739] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:52.112 [2024-11-15 10:32:52.699746] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:52.112 [2024-11-15 10:32:52.699755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.112 [2024-11-15 10:32:52.699764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.112 [2024-11-15 10:32:52.699770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.112 [2024-11-15 10:32:52.699777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eecec0) on tqpair=0x1e88750 00:14:52.112 [2024-11-15 10:32:52.699812] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.112 [2024-11-15 10:32:52.699828] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.112 [2024-11-15 10:32:52.699835] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.112 [2024-11-15 10:32:52.699844] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eecd40) on tqpair=0x1e88750 00:14:52.112 [2024-11-15 10:32:52.699867] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.112 [2024-11-15 10:32:52.699878] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e88750) 00:14:52.112 [2024-11-15 10:32:52.699893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.112 [2024-11-15 10:32:52.699941] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecd40, cid 4, qid 0 00:14:52.112 [2024-11-15 10:32:52.700045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:52.112 [2024-11-15 10:32:52.704107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:52.112 [2024-11-15 10:32:52.704133] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:52.112 [2024-11-15 10:32:52.704140] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e88750): datao=0, datal=3072, cccid=4 00:14:52.112 [2024-11-15 10:32:52.704149] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eecd40) on tqpair(0x1e88750): expected_datao=0, payload_size=3072 00:14:52.112 [2024-11-15 10:32:52.704157] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.112 [2024-11-15 10:32:52.704171] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:14:52.112 [2024-11-15 10:32:52.704178] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:52.112 [2024-11-15 10:32:52.704198] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.112 [2024-11-15 10:32:52.704210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.112 [2024-11-15 10:32:52.704217] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.112 [2024-11-15 10:32:52.704225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eecd40) on tqpair=0x1e88750 00:14:52.112 [2024-11-15 10:32:52.704247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.112 [2024-11-15 10:32:52.704257] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e88750) 00:14:52.112 [2024-11-15 10:32:52.704272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.112 [2024-11-15 10:32:52.704328] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecd40, cid 4, qid 0 00:14:52.112 [2024-11-15 10:32:52.704444] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:52.112 [2024-11-15 10:32:52.704468] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:52.112 [2024-11-15 10:32:52.704478] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:52.112 [2024-11-15 10:32:52.704485] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e88750): datao=0, datal=8, cccid=4 00:14:52.112 [2024-11-15 10:32:52.704493] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eecd40) on tqpair(0x1e88750): expected_datao=0, payload_size=8 00:14:52.112 [2024-11-15 10:32:52.704501] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.112 [2024-11-15 10:32:52.704512] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:52.112 [2024-11-15 10:32:52.704519] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:52.112 [2024-11-15 10:32:52.704551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.112 [2024-11-15 10:32:52.704563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.112 [2024-11-15 10:32:52.704570] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.112 [2024-11-15 10:32:52.704577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eecd40) on tqpair=0x1e88750 00:14:52.112 ===================================================== 00:14:52.112 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:52.112 ===================================================== 00:14:52.112 Controller Capabilities/Features 00:14:52.112 ================================ 00:14:52.112 Vendor ID: 0000 00:14:52.112 Subsystem Vendor ID: 0000 00:14:52.112 Serial Number: .................... 00:14:52.112 Model Number: ........................................ 
00:14:52.112 Firmware Version: 25.01 00:14:52.112 Recommended Arb Burst: 0 00:14:52.112 IEEE OUI Identifier: 00 00 00 00:14:52.112 Multi-path I/O 00:14:52.112 May have multiple subsystem ports: No 00:14:52.112 May have multiple controllers: No 00:14:52.112 Associated with SR-IOV VF: No 00:14:52.112 Max Data Transfer Size: 131072 00:14:52.112 Max Number of Namespaces: 0 00:14:52.112 Max Number of I/O Queues: 1024 00:14:52.112 NVMe Specification Version (VS): 1.3 00:14:52.112 NVMe Specification Version (Identify): 1.3 00:14:52.112 Maximum Queue Entries: 128 00:14:52.112 Contiguous Queues Required: Yes 00:14:52.112 Arbitration Mechanisms Supported 00:14:52.112 Weighted Round Robin: Not Supported 00:14:52.112 Vendor Specific: Not Supported 00:14:52.112 Reset Timeout: 15000 ms 00:14:52.112 Doorbell Stride: 4 bytes 00:14:52.112 NVM Subsystem Reset: Not Supported 00:14:52.112 Command Sets Supported 00:14:52.112 NVM Command Set: Supported 00:14:52.112 Boot Partition: Not Supported 00:14:52.112 Memory Page Size Minimum: 4096 bytes 00:14:52.112 Memory Page Size Maximum: 4096 bytes 00:14:52.112 Persistent Memory Region: Not Supported 00:14:52.112 Optional Asynchronous Events Supported 00:14:52.112 Namespace Attribute Notices: Not Supported 00:14:52.112 Firmware Activation Notices: Not Supported 00:14:52.112 ANA Change Notices: Not Supported 00:14:52.112 PLE Aggregate Log Change Notices: Not Supported 00:14:52.112 LBA Status Info Alert Notices: Not Supported 00:14:52.112 EGE Aggregate Log Change Notices: Not Supported 00:14:52.112 Normal NVM Subsystem Shutdown event: Not Supported 00:14:52.112 Zone Descriptor Change Notices: Not Supported 00:14:52.112 Discovery Log Change Notices: Supported 00:14:52.112 Controller Attributes 00:14:52.112 128-bit Host Identifier: Not Supported 00:14:52.112 Non-Operational Permissive Mode: Not Supported 00:14:52.112 NVM Sets: Not Supported 00:14:52.112 Read Recovery Levels: Not Supported 00:14:52.112 Endurance Groups: Not Supported 00:14:52.112 Predictable Latency Mode: Not Supported 00:14:52.112 Traffic Based Keep ALive: Not Supported 00:14:52.112 Namespace Granularity: Not Supported 00:14:52.112 SQ Associations: Not Supported 00:14:52.112 UUID List: Not Supported 00:14:52.112 Multi-Domain Subsystem: Not Supported 00:14:52.112 Fixed Capacity Management: Not Supported 00:14:52.112 Variable Capacity Management: Not Supported 00:14:52.112 Delete Endurance Group: Not Supported 00:14:52.112 Delete NVM Set: Not Supported 00:14:52.112 Extended LBA Formats Supported: Not Supported 00:14:52.112 Flexible Data Placement Supported: Not Supported 00:14:52.112 00:14:52.112 Controller Memory Buffer Support 00:14:52.112 ================================ 00:14:52.112 Supported: No 00:14:52.112 00:14:52.112 Persistent Memory Region Support 00:14:52.112 ================================ 00:14:52.112 Supported: No 00:14:52.112 00:14:52.112 Admin Command Set Attributes 00:14:52.112 ============================ 00:14:52.112 Security Send/Receive: Not Supported 00:14:52.112 Format NVM: Not Supported 00:14:52.112 Firmware Activate/Download: Not Supported 00:14:52.112 Namespace Management: Not Supported 00:14:52.113 Device Self-Test: Not Supported 00:14:52.113 Directives: Not Supported 00:14:52.113 NVMe-MI: Not Supported 00:14:52.113 Virtualization Management: Not Supported 00:14:52.113 Doorbell Buffer Config: Not Supported 00:14:52.113 Get LBA Status Capability: Not Supported 00:14:52.113 Command & Feature Lockdown Capability: Not Supported 00:14:52.113 Abort Command Limit: 1 00:14:52.113 Async 
Event Request Limit: 4 00:14:52.113 Number of Firmware Slots: N/A 00:14:52.113 Firmware Slot 1 Read-Only: N/A 00:14:52.113 Firmware Activation Without Reset: N/A 00:14:52.113 Multiple Update Detection Support: N/A 00:14:52.113 Firmware Update Granularity: No Information Provided 00:14:52.113 Per-Namespace SMART Log: No 00:14:52.113 Asymmetric Namespace Access Log Page: Not Supported 00:14:52.113 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:52.113 Command Effects Log Page: Not Supported 00:14:52.113 Get Log Page Extended Data: Supported 00:14:52.113 Telemetry Log Pages: Not Supported 00:14:52.113 Persistent Event Log Pages: Not Supported 00:14:52.113 Supported Log Pages Log Page: May Support 00:14:52.113 Commands Supported & Effects Log Page: Not Supported 00:14:52.113 Feature Identifiers & Effects Log Page:May Support 00:14:52.113 NVMe-MI Commands & Effects Log Page: May Support 00:14:52.113 Data Area 4 for Telemetry Log: Not Supported 00:14:52.113 Error Log Page Entries Supported: 128 00:14:52.113 Keep Alive: Not Supported 00:14:52.113 00:14:52.113 NVM Command Set Attributes 00:14:52.113 ========================== 00:14:52.113 Submission Queue Entry Size 00:14:52.113 Max: 1 00:14:52.113 Min: 1 00:14:52.113 Completion Queue Entry Size 00:14:52.113 Max: 1 00:14:52.113 Min: 1 00:14:52.113 Number of Namespaces: 0 00:14:52.113 Compare Command: Not Supported 00:14:52.113 Write Uncorrectable Command: Not Supported 00:14:52.113 Dataset Management Command: Not Supported 00:14:52.113 Write Zeroes Command: Not Supported 00:14:52.113 Set Features Save Field: Not Supported 00:14:52.113 Reservations: Not Supported 00:14:52.113 Timestamp: Not Supported 00:14:52.113 Copy: Not Supported 00:14:52.113 Volatile Write Cache: Not Present 00:14:52.113 Atomic Write Unit (Normal): 1 00:14:52.113 Atomic Write Unit (PFail): 1 00:14:52.113 Atomic Compare & Write Unit: 1 00:14:52.113 Fused Compare & Write: Supported 00:14:52.113 Scatter-Gather List 00:14:52.113 SGL Command Set: Supported 00:14:52.113 SGL Keyed: Supported 00:14:52.113 SGL Bit Bucket Descriptor: Not Supported 00:14:52.113 SGL Metadata Pointer: Not Supported 00:14:52.113 Oversized SGL: Not Supported 00:14:52.113 SGL Metadata Address: Not Supported 00:14:52.113 SGL Offset: Supported 00:14:52.113 Transport SGL Data Block: Not Supported 00:14:52.113 Replay Protected Memory Block: Not Supported 00:14:52.113 00:14:52.113 Firmware Slot Information 00:14:52.113 ========================= 00:14:52.113 Active slot: 0 00:14:52.113 00:14:52.113 00:14:52.113 Error Log 00:14:52.113 ========= 00:14:52.113 00:14:52.113 Active Namespaces 00:14:52.113 ================= 00:14:52.113 Discovery Log Page 00:14:52.113 ================== 00:14:52.113 Generation Counter: 2 00:14:52.113 Number of Records: 2 00:14:52.113 Record Format: 0 00:14:52.113 00:14:52.113 Discovery Log Entry 0 00:14:52.113 ---------------------- 00:14:52.113 Transport Type: 3 (TCP) 00:14:52.113 Address Family: 1 (IPv4) 00:14:52.113 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:52.113 Entry Flags: 00:14:52.113 Duplicate Returned Information: 1 00:14:52.113 Explicit Persistent Connection Support for Discovery: 1 00:14:52.113 Transport Requirements: 00:14:52.113 Secure Channel: Not Required 00:14:52.113 Port ID: 0 (0x0000) 00:14:52.113 Controller ID: 65535 (0xffff) 00:14:52.113 Admin Max SQ Size: 128 00:14:52.113 Transport Service Identifier: 4420 00:14:52.113 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:52.113 Transport Address: 10.0.0.3 00:14:52.113 
Discovery Log Entry 1 00:14:52.113 ---------------------- 00:14:52.113 Transport Type: 3 (TCP) 00:14:52.113 Address Family: 1 (IPv4) 00:14:52.113 Subsystem Type: 2 (NVM Subsystem) 00:14:52.113 Entry Flags: 00:14:52.113 Duplicate Returned Information: 0 00:14:52.113 Explicit Persistent Connection Support for Discovery: 0 00:14:52.113 Transport Requirements: 00:14:52.113 Secure Channel: Not Required 00:14:52.113 Port ID: 0 (0x0000) 00:14:52.113 Controller ID: 65535 (0xffff) 00:14:52.113 Admin Max SQ Size: 128 00:14:52.113 Transport Service Identifier: 4420 00:14:52.113 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:52.113 Transport Address: 10.0.0.3 [2024-11-15 10:32:52.704740] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:14:52.113 [2024-11-15 10:32:52.704762] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eec740) on tqpair=0x1e88750 00:14:52.113 [2024-11-15 10:32:52.704775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.113 [2024-11-15 10:32:52.704786] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eec8c0) on tqpair=0x1e88750 00:14:52.113 [2024-11-15 10:32:52.704794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.113 [2024-11-15 10:32:52.704803] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eeca40) on tqpair=0x1e88750 00:14:52.113 [2024-11-15 10:32:52.704811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.113 [2024-11-15 10:32:52.704819] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eecbc0) on tqpair=0x1e88750 00:14:52.113 [2024-11-15 10:32:52.704827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.113 [2024-11-15 10:32:52.704844] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.113 [2024-11-15 10:32:52.704853] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.113 [2024-11-15 10:32:52.704860] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e88750) 00:14:52.113 [2024-11-15 10:32:52.704873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.113 [2024-11-15 10:32:52.704914] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecbc0, cid 3, qid 0 00:14:52.113 [2024-11-15 10:32:52.704997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.113 [2024-11-15 10:32:52.705011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.113 [2024-11-15 10:32:52.705018] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.113 [2024-11-15 10:32:52.705025] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eecbc0) on tqpair=0x1e88750 00:14:52.113 [2024-11-15 10:32:52.705039] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.113 [2024-11-15 10:32:52.705069] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.113 [2024-11-15 10:32:52.705081] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e88750) 00:14:52.113 [2024-11-15 
10:32:52.705094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.113 [2024-11-15 10:32:52.705138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecbc0, cid 3, qid 0 00:14:52.113 [2024-11-15 10:32:52.705257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.113 [2024-11-15 10:32:52.705274] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.113 [2024-11-15 10:32:52.705281] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.113 [2024-11-15 10:32:52.705288] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eecbc0) on tqpair=0x1e88750 00:14:52.113 [2024-11-15 10:32:52.705297] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:14:52.113 [2024-11-15 10:32:52.705306] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:14:52.113 [2024-11-15 10:32:52.705325] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.113 [2024-11-15 10:32:52.705336] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.113 [2024-11-15 10:32:52.705343] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e88750) 00:14:52.114 [2024-11-15 10:32:52.705357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.114 [2024-11-15 10:32:52.705395] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecbc0, cid 3, qid 0 00:14:52.114 [2024-11-15 10:32:52.705466] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.114 [2024-11-15 10:32:52.705489] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.114 [2024-11-15 10:32:52.705497] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.705504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eecbc0) on tqpair=0x1e88750 00:14:52.114 [2024-11-15 10:32:52.705524] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.705534] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.705540] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e88750) 00:14:52.114 [2024-11-15 10:32:52.705554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.114 [2024-11-15 10:32:52.705593] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecbc0, cid 3, qid 0 00:14:52.114 [2024-11-15 10:32:52.705664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.114 [2024-11-15 10:32:52.705693] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.114 [2024-11-15 10:32:52.705701] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.705709] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eecbc0) on tqpair=0x1e88750 00:14:52.114 [2024-11-15 10:32:52.705728] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.705736] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.705742] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e88750) 00:14:52.114 [2024-11-15 10:32:52.705755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.114 [2024-11-15 10:32:52.705789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecbc0, cid 3, qid 0 00:14:52.114 [2024-11-15 10:32:52.705855] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.114 [2024-11-15 10:32:52.705877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.114 [2024-11-15 10:32:52.705885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.705892] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eecbc0) on tqpair=0x1e88750 00:14:52.114 [2024-11-15 10:32:52.705911] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.705920] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.705926] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e88750) 00:14:52.114 [2024-11-15 10:32:52.705938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.114 [2024-11-15 10:32:52.705972] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecbc0, cid 3, qid 0 00:14:52.114 [2024-11-15 10:32:52.706039] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.114 [2024-11-15 10:32:52.706082] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.114 [2024-11-15 10:32:52.706093] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.706101] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eecbc0) on tqpair=0x1e88750 00:14:52.114 [2024-11-15 10:32:52.706123] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.706133] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.706140] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e88750) 00:14:52.114 [2024-11-15 10:32:52.706154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.114 [2024-11-15 10:32:52.706192] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecbc0, cid 3, qid 0 00:14:52.114 [2024-11-15 10:32:52.706269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.114 [2024-11-15 10:32:52.706291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.114 [2024-11-15 10:32:52.706301] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.706308] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eecbc0) on tqpair=0x1e88750 00:14:52.114 [2024-11-15 10:32:52.706339] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.706348] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.706354] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e88750) 00:14:52.114 [2024-11-15 10:32:52.706367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.114 [2024-11-15 10:32:52.706405] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecbc0, cid 3, qid 0 00:14:52.114 [2024-11-15 10:32:52.706478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.114 [2024-11-15 10:32:52.706498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.114 [2024-11-15 10:32:52.706505] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.706512] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eecbc0) on tqpair=0x1e88750 00:14:52.114 [2024-11-15 10:32:52.706534] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.706549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.706557] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e88750) 00:14:52.114 [2024-11-15 10:32:52.706571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.114 [2024-11-15 10:32:52.706609] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecbc0, cid 3, qid 0 00:14:52.114 [2024-11-15 10:32:52.706682] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.114 [2024-11-15 10:32:52.706703] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.114 [2024-11-15 10:32:52.706711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.706718] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eecbc0) on tqpair=0x1e88750 00:14:52.114 [2024-11-15 10:32:52.706737] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.706746] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.706753] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e88750) 00:14:52.114 [2024-11-15 10:32:52.706765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.114 [2024-11-15 10:32:52.706802] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecbc0, cid 3, qid 0 00:14:52.114 [2024-11-15 10:32:52.706867] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.114 [2024-11-15 10:32:52.706887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.114 [2024-11-15 10:32:52.706894] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.706901] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eecbc0) on tqpair=0x1e88750 00:14:52.114 [2024-11-15 10:32:52.706920] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.706935] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.706943] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e88750) 00:14:52.114 [2024-11-15 10:32:52.706956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.114 [2024-11-15 10:32:52.706992] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecbc0, cid 3, qid 0 00:14:52.114 
[2024-11-15 10:32:52.707084] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.114 [2024-11-15 10:32:52.707100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.114 [2024-11-15 10:32:52.707107] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.707114] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eecbc0) on tqpair=0x1e88750 00:14:52.114 [2024-11-15 10:32:52.707134] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.707143] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.114 [2024-11-15 10:32:52.707150] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e88750) 00:14:52.114 [2024-11-15 10:32:52.707162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.114 [2024-11-15 10:32:52.707196] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecbc0, cid 3, qid 0 00:14:52.115 [2024-11-15 10:32:52.707282] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.115 [2024-11-15 10:32:52.707304] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.115 [2024-11-15 10:32:52.707313] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.115 [2024-11-15 10:32:52.707320] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eecbc0) on tqpair=0x1e88750 00:14:52.115 [2024-11-15 10:32:52.707338] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.115 [2024-11-15 10:32:52.707347] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.115 [2024-11-15 10:32:52.707353] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e88750) 00:14:52.115 [2024-11-15 10:32:52.707365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.115 [2024-11-15 10:32:52.707400] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecbc0, cid 3, qid 0 00:14:52.115 [2024-11-15 10:32:52.707482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.115 [2024-11-15 10:32:52.707512] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.115 [2024-11-15 10:32:52.707521] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.115 [2024-11-15 10:32:52.707528] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eecbc0) on tqpair=0x1e88750 00:14:52.115 [2024-11-15 10:32:52.707550] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.115 [2024-11-15 10:32:52.707560] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.115 [2024-11-15 10:32:52.707567] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e88750) 00:14:52.115 [2024-11-15 10:32:52.707580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.115 [2024-11-15 10:32:52.707617] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecbc0, cid 3, qid 0 00:14:52.115 [2024-11-15 10:32:52.707698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.115 [2024-11-15 10:32:52.707714] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:14:52.115 [2024-11-15 10:32:52.707721] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.115 [2024-11-15 10:32:52.707729] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eecbc0) on tqpair=0x1e88750 00:14:52.115 [2024-11-15 10:32:52.707747] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.115 [2024-11-15 10:32:52.707757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.115 [2024-11-15 10:32:52.707765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e88750) 00:14:52.115 [2024-11-15 10:32:52.707781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.115 [2024-11-15 10:32:52.707817] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecbc0, cid 3, qid 0 00:14:52.115 [2024-11-15 10:32:52.707897] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.115 [2024-11-15 10:32:52.707918] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.115 [2024-11-15 10:32:52.707925] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.115 [2024-11-15 10:32:52.707933] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eecbc0) on tqpair=0x1e88750 00:14:52.115 [2024-11-15 10:32:52.707950] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.115 [2024-11-15 10:32:52.707958] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.115 [2024-11-15 10:32:52.707964] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e88750) 00:14:52.115 [2024-11-15 10:32:52.707987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.115 [2024-11-15 10:32:52.708022] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecbc0, cid 3, qid 0 00:14:52.115 [2024-11-15 10:32:52.712089] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.115 [2024-11-15 10:32:52.712131] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.115 [2024-11-15 10:32:52.712143] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.115 [2024-11-15 10:32:52.712151] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eecbc0) on tqpair=0x1e88750 00:14:52.115 [2024-11-15 10:32:52.712179] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.115 [2024-11-15 10:32:52.712190] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.115 [2024-11-15 10:32:52.712196] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e88750) 00:14:52.115 [2024-11-15 10:32:52.712212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.115 [2024-11-15 10:32:52.712258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecbc0, cid 3, qid 0 00:14:52.115 [2024-11-15 10:32:52.712340] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.115 [2024-11-15 10:32:52.712357] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.115 [2024-11-15 10:32:52.712364] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.115 [2024-11-15 10:32:52.712371] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1eecbc0) on tqpair=0x1e88750 00:14:52.115 [2024-11-15 10:32:52.712387] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:14:52.115 00:14:52.115 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:52.115 [2024-11-15 10:32:52.768243] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:14:52.115 [2024-11-15 10:32:52.768303] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74240 ] 00:14:52.115 [2024-11-15 10:32:52.941360] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:14:52.115 [2024-11-15 10:32:52.941446] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:52.115 [2024-11-15 10:32:52.941454] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:52.115 [2024-11-15 10:32:52.941474] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:52.115 [2024-11-15 10:32:52.941491] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:52.115 [2024-11-15 10:32:52.941871] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:14:52.115 [2024-11-15 10:32:52.941947] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1975750 0 00:14:52.115 [2024-11-15 10:32:52.948082] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:52.115 [2024-11-15 10:32:52.948113] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:52.115 [2024-11-15 10:32:52.948120] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:52.115 [2024-11-15 10:32:52.948125] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:52.115 [2024-11-15 10:32:52.948159] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.115 [2024-11-15 10:32:52.948167] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.115 [2024-11-15 10:32:52.948172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1975750) 00:14:52.115 [2024-11-15 10:32:52.948188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:52.115 [2024-11-15 10:32:52.948224] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9740, cid 0, qid 0 00:14:52.115 [2024-11-15 10:32:52.956073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.115 [2024-11-15 10:32:52.956101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.115 [2024-11-15 10:32:52.956107] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.115 [2024-11-15 10:32:52.956113] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9740) on tqpair=0x1975750 00:14:52.115 [2024-11-15 10:32:52.956131] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 
00:14:52.115 [2024-11-15 10:32:52.956142] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:14:52.115 [2024-11-15 10:32:52.956149] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:14:52.115 [2024-11-15 10:32:52.956170] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.115 [2024-11-15 10:32:52.956177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.115 [2024-11-15 10:32:52.956182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1975750) 00:14:52.115 [2024-11-15 10:32:52.956194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.115 [2024-11-15 10:32:52.956225] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9740, cid 0, qid 0 00:14:52.115 [2024-11-15 10:32:52.956292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.115 [2024-11-15 10:32:52.956300] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.115 [2024-11-15 10:32:52.956304] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.116 [2024-11-15 10:32:52.956309] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9740) on tqpair=0x1975750 00:14:52.116 [2024-11-15 10:32:52.956315] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:14:52.116 [2024-11-15 10:32:52.956324] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:14:52.116 [2024-11-15 10:32:52.956333] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.116 [2024-11-15 10:32:52.956338] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.116 [2024-11-15 10:32:52.956342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1975750) 00:14:52.116 [2024-11-15 10:32:52.956351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.116 [2024-11-15 10:32:52.956372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9740, cid 0, qid 0 00:14:52.116 [2024-11-15 10:32:52.956424] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.116 [2024-11-15 10:32:52.956431] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.116 [2024-11-15 10:32:52.956435] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.116 [2024-11-15 10:32:52.956440] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9740) on tqpair=0x1975750 00:14:52.116 [2024-11-15 10:32:52.956447] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:14:52.116 [2024-11-15 10:32:52.956456] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:14:52.116 [2024-11-15 10:32:52.956464] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.116 [2024-11-15 10:32:52.956469] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.116 [2024-11-15 10:32:52.956474] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x1975750) 00:14:52.116 [2024-11-15 10:32:52.956482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.116 [2024-11-15 10:32:52.956501] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9740, cid 0, qid 0 00:14:52.116 [2024-11-15 10:32:52.956552] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.116 [2024-11-15 10:32:52.956559] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.116 [2024-11-15 10:32:52.956563] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.116 [2024-11-15 10:32:52.956568] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9740) on tqpair=0x1975750 00:14:52.116 [2024-11-15 10:32:52.956574] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:52.116 [2024-11-15 10:32:52.956585] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.116 [2024-11-15 10:32:52.956591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.116 [2024-11-15 10:32:52.956595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1975750) 00:14:52.116 [2024-11-15 10:32:52.956603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.116 [2024-11-15 10:32:52.956621] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9740, cid 0, qid 0 00:14:52.116 [2024-11-15 10:32:52.956672] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.116 [2024-11-15 10:32:52.956679] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.116 [2024-11-15 10:32:52.956683] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.116 [2024-11-15 10:32:52.956688] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9740) on tqpair=0x1975750 00:14:52.116 [2024-11-15 10:32:52.956693] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:14:52.116 [2024-11-15 10:32:52.956699] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:14:52.116 [2024-11-15 10:32:52.956708] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:52.116 [2024-11-15 10:32:52.956820] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:14:52.116 [2024-11-15 10:32:52.956827] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:52.116 [2024-11-15 10:32:52.956837] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.116 [2024-11-15 10:32:52.956842] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.116 [2024-11-15 10:32:52.956846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1975750) 00:14:52.116 [2024-11-15 10:32:52.956854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:52.116 [2024-11-15 10:32:52.956875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9740, cid 0, qid 0 00:14:52.116 [2024-11-15 10:32:52.956933] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.116 [2024-11-15 10:32:52.956940] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.116 [2024-11-15 10:32:52.956944] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.116 [2024-11-15 10:32:52.956949] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9740) on tqpair=0x1975750 00:14:52.116 [2024-11-15 10:32:52.956955] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:52.116 [2024-11-15 10:32:52.956965] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.116 [2024-11-15 10:32:52.956971] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.116 [2024-11-15 10:32:52.956975] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1975750) 00:14:52.116 [2024-11-15 10:32:52.956983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.116 [2024-11-15 10:32:52.957001] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9740, cid 0, qid 0 00:14:52.116 [2024-11-15 10:32:52.957048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.116 [2024-11-15 10:32:52.957071] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.116 [2024-11-15 10:32:52.957075] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.116 [2024-11-15 10:32:52.957080] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9740) on tqpair=0x1975750 00:14:52.116 [2024-11-15 10:32:52.957085] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:52.116 [2024-11-15 10:32:52.957091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:14:52.116 [2024-11-15 10:32:52.957101] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:14:52.116 [2024-11-15 10:32:52.957118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:14:52.116 [2024-11-15 10:32:52.957131] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.116 [2024-11-15 10:32:52.957136] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1975750) 00:14:52.116 [2024-11-15 10:32:52.957145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.116 [2024-11-15 10:32:52.957167] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9740, cid 0, qid 0 00:14:52.116 [2024-11-15 10:32:52.957287] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:52.116 [2024-11-15 10:32:52.957295] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:52.116 [2024-11-15 10:32:52.957299] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:52.116 [2024-11-15 
10:32:52.957303] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1975750): datao=0, datal=4096, cccid=0 00:14:52.117 [2024-11-15 10:32:52.957309] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19d9740) on tqpair(0x1975750): expected_datao=0, payload_size=4096 00:14:52.117 [2024-11-15 10:32:52.957314] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.117 [2024-11-15 10:32:52.957324] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:52.117 [2024-11-15 10:32:52.957328] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:52.117 [2024-11-15 10:32:52.957338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.117 [2024-11-15 10:32:52.957345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.117 [2024-11-15 10:32:52.957349] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.117 [2024-11-15 10:32:52.957354] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9740) on tqpair=0x1975750 00:14:52.117 [2024-11-15 10:32:52.957364] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:14:52.117 [2024-11-15 10:32:52.957370] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:14:52.117 [2024-11-15 10:32:52.957375] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:14:52.117 [2024-11-15 10:32:52.957380] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:14:52.117 [2024-11-15 10:32:52.957386] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:14:52.117 [2024-11-15 10:32:52.957391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:14:52.117 [2024-11-15 10:32:52.957407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:14:52.117 [2024-11-15 10:32:52.957416] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.117 [2024-11-15 10:32:52.957421] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.117 [2024-11-15 10:32:52.957425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1975750) 00:14:52.117 [2024-11-15 10:32:52.957434] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:52.117 [2024-11-15 10:32:52.957455] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9740, cid 0, qid 0 00:14:52.117 [2024-11-15 10:32:52.957511] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.117 [2024-11-15 10:32:52.957519] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.117 [2024-11-15 10:32:52.957523] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.117 [2024-11-15 10:32:52.957527] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9740) on tqpair=0x1975750 00:14:52.117 [2024-11-15 10:32:52.957536] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.117 [2024-11-15 10:32:52.957541] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:14:52.117 [2024-11-15 10:32:52.957545] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1975750) 00:14:52.117 [2024-11-15 10:32:52.957552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.117 [2024-11-15 10:32:52.957560] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.117 [2024-11-15 10:32:52.957564] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.117 [2024-11-15 10:32:52.957568] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1975750) 00:14:52.117 [2024-11-15 10:32:52.957575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.117 [2024-11-15 10:32:52.957582] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.117 [2024-11-15 10:32:52.957586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.117 [2024-11-15 10:32:52.957590] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1975750) 00:14:52.117 [2024-11-15 10:32:52.957597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.117 [2024-11-15 10:32:52.957604] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.117 [2024-11-15 10:32:52.957608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.117 [2024-11-15 10:32:52.957612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1975750) 00:14:52.117 [2024-11-15 10:32:52.957618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.117 [2024-11-15 10:32:52.957624] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:52.117 [2024-11-15 10:32:52.957639] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:52.117 [2024-11-15 10:32:52.957648] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.117 [2024-11-15 10:32:52.957652] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1975750) 00:14:52.117 [2024-11-15 10:32:52.957660] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.117 [2024-11-15 10:32:52.957681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9740, cid 0, qid 0 00:14:52.117 [2024-11-15 10:32:52.957689] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d98c0, cid 1, qid 0 00:14:52.117 [2024-11-15 10:32:52.957695] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9a40, cid 2, qid 0 00:14:52.117 [2024-11-15 10:32:52.957700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9bc0, cid 3, qid 0 00:14:52.117 [2024-11-15 10:32:52.957705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9d40, cid 4, qid 0 00:14:52.117 [2024-11-15 10:32:52.957802] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.117 [2024-11-15 10:32:52.957810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:14:52.117 [2024-11-15 10:32:52.957814] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.117 [2024-11-15 10:32:52.957818] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9d40) on tqpair=0x1975750 00:14:52.117 [2024-11-15 10:32:52.957824] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:14:52.117 [2024-11-15 10:32:52.957830] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:52.117 [2024-11-15 10:32:52.957840] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:14:52.117 [2024-11-15 10:32:52.957851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:52.117 [2024-11-15 10:32:52.957859] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.117 [2024-11-15 10:32:52.957864] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.117 [2024-11-15 10:32:52.957868] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1975750) 00:14:52.117 [2024-11-15 10:32:52.957876] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:52.117 [2024-11-15 10:32:52.957895] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9d40, cid 4, qid 0 00:14:52.117 [2024-11-15 10:32:52.957951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.117 [2024-11-15 10:32:52.957958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.117 [2024-11-15 10:32:52.957962] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.117 [2024-11-15 10:32:52.957967] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9d40) on tqpair=0x1975750 00:14:52.380 [2024-11-15 10:32:52.958035] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:14:52.380 [2024-11-15 10:32:52.958048] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:52.380 [2024-11-15 10:32:52.958077] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.380 [2024-11-15 10:32:52.958082] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1975750) 00:14:52.380 [2024-11-15 10:32:52.958091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.380 [2024-11-15 10:32:52.958112] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9d40, cid 4, qid 0 00:14:52.380 [2024-11-15 10:32:52.958196] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:52.380 [2024-11-15 10:32:52.958203] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:52.380 [2024-11-15 10:32:52.958207] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:52.380 [2024-11-15 10:32:52.958212] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1975750): datao=0, 
datal=4096, cccid=4 00:14:52.380 [2024-11-15 10:32:52.958217] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19d9d40) on tqpair(0x1975750): expected_datao=0, payload_size=4096 00:14:52.380 [2024-11-15 10:32:52.958222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.380 [2024-11-15 10:32:52.958230] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:52.380 [2024-11-15 10:32:52.958235] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:52.380 [2024-11-15 10:32:52.958244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.380 [2024-11-15 10:32:52.958251] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.380 [2024-11-15 10:32:52.958255] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.380 [2024-11-15 10:32:52.958259] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9d40) on tqpair=0x1975750 00:14:52.380 [2024-11-15 10:32:52.958277] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:14:52.380 [2024-11-15 10:32:52.958289] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:14:52.380 [2024-11-15 10:32:52.958301] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:14:52.380 [2024-11-15 10:32:52.958310] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.380 [2024-11-15 10:32:52.958314] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1975750) 00:14:52.380 [2024-11-15 10:32:52.958322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.380 [2024-11-15 10:32:52.958343] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9d40, cid 4, qid 0 00:14:52.380 [2024-11-15 10:32:52.958470] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:52.380 [2024-11-15 10:32:52.958477] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:52.380 [2024-11-15 10:32:52.958481] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:52.380 [2024-11-15 10:32:52.958485] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1975750): datao=0, datal=4096, cccid=4 00:14:52.380 [2024-11-15 10:32:52.958490] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19d9d40) on tqpair(0x1975750): expected_datao=0, payload_size=4096 00:14:52.380 [2024-11-15 10:32:52.958495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.380 [2024-11-15 10:32:52.958503] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:52.380 [2024-11-15 10:32:52.958507] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:52.380 [2024-11-15 10:32:52.958516] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.380 [2024-11-15 10:32:52.958523] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.380 [2024-11-15 10:32:52.958527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.380 [2024-11-15 10:32:52.958531] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9d40) on tqpair=0x1975750 00:14:52.380 [2024-11-15 10:32:52.958552] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:52.380 [2024-11-15 10:32:52.958564] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:52.380 [2024-11-15 10:32:52.958574] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.380 [2024-11-15 10:32:52.958578] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1975750) 00:14:52.380 [2024-11-15 10:32:52.958586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.380 [2024-11-15 10:32:52.958607] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9d40, cid 4, qid 0 00:14:52.380 [2024-11-15 10:32:52.958683] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:52.380 [2024-11-15 10:32:52.958691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:52.380 [2024-11-15 10:32:52.958695] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:52.380 [2024-11-15 10:32:52.958699] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1975750): datao=0, datal=4096, cccid=4 00:14:52.380 [2024-11-15 10:32:52.958704] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19d9d40) on tqpair(0x1975750): expected_datao=0, payload_size=4096 00:14:52.380 [2024-11-15 10:32:52.958709] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.380 [2024-11-15 10:32:52.958716] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:52.380 [2024-11-15 10:32:52.958721] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:52.380 [2024-11-15 10:32:52.958730] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.380 [2024-11-15 10:32:52.958736] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.381 [2024-11-15 10:32:52.958740] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.381 [2024-11-15 10:32:52.958744] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9d40) on tqpair=0x1975750 00:14:52.381 [2024-11-15 10:32:52.958754] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:52.381 [2024-11-15 10:32:52.958763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:14:52.381 [2024-11-15 10:32:52.958775] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:14:52.381 [2024-11-15 10:32:52.958783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:52.381 [2024-11-15 10:32:52.958789] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:52.381 [2024-11-15 10:32:52.958794] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:14:52.381 [2024-11-15 10:32:52.958800] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:14:52.381 [2024-11-15 10:32:52.958806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:14:52.381 [2024-11-15 10:32:52.958812] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:14:52.381 [2024-11-15 10:32:52.958832] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.381 [2024-11-15 10:32:52.958837] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1975750) 00:14:52.381 [2024-11-15 10:32:52.958845] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.381 [2024-11-15 10:32:52.958853] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.381 [2024-11-15 10:32:52.958858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.381 [2024-11-15 10:32:52.958862] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1975750) 00:14:52.381 [2024-11-15 10:32:52.958869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.381 [2024-11-15 10:32:52.958901] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9d40, cid 4, qid 0 00:14:52.381 [2024-11-15 10:32:52.958909] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9ec0, cid 5, qid 0 00:14:52.381 [2024-11-15 10:32:52.958979] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.381 [2024-11-15 10:32:52.958986] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.381 [2024-11-15 10:32:52.958990] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.381 [2024-11-15 10:32:52.958994] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9d40) on tqpair=0x1975750 00:14:52.381 [2024-11-15 10:32:52.959002] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.381 [2024-11-15 10:32:52.959008] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.381 [2024-11-15 10:32:52.959012] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.381 [2024-11-15 10:32:52.959016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9ec0) on tqpair=0x1975750 00:14:52.381 [2024-11-15 10:32:52.959028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.381 [2024-11-15 10:32:52.959033] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1975750) 00:14:52.381 [2024-11-15 10:32:52.959040] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.381 [2024-11-15 10:32:52.959074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9ec0, cid 5, qid 0 00:14:52.381 [2024-11-15 10:32:52.959130] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.381 [2024-11-15 10:32:52.959137] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.381 [2024-11-15 10:32:52.959141] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.381 [2024-11-15 10:32:52.959146] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x19d9ec0) on tqpair=0x1975750 00:14:52.381 [2024-11-15 10:32:52.959157] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.381 [2024-11-15 10:32:52.959162] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1975750) 00:14:52.381 [2024-11-15 10:32:52.959169] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.381 [2024-11-15 10:32:52.959188] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9ec0, cid 5, qid 0 00:14:52.381 [2024-11-15 10:32:52.959247] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.381 [2024-11-15 10:32:52.959254] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.381 [2024-11-15 10:32:52.959258] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.381 [2024-11-15 10:32:52.959262] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9ec0) on tqpair=0x1975750 00:14:52.381 [2024-11-15 10:32:52.959273] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.381 [2024-11-15 10:32:52.959278] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1975750) 00:14:52.381 [2024-11-15 10:32:52.959285] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.381 [2024-11-15 10:32:52.959302] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9ec0, cid 5, qid 0 00:14:52.381 [2024-11-15 10:32:52.959357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.381 [2024-11-15 10:32:52.959365] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.381 [2024-11-15 10:32:52.959369] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.381 [2024-11-15 10:32:52.959373] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9ec0) on tqpair=0x1975750 00:14:52.381 [2024-11-15 10:32:52.959394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.381 [2024-11-15 10:32:52.959400] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1975750) 00:14:52.381 [2024-11-15 10:32:52.959408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.381 [2024-11-15 10:32:52.959417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.381 [2024-11-15 10:32:52.959422] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1975750) 00:14:52.381 [2024-11-15 10:32:52.959428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.381 [2024-11-15 10:32:52.959437] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.381 [2024-11-15 10:32:52.959442] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1975750) 00:14:52.381 [2024-11-15 10:32:52.959448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.381 [2024-11-15 10:32:52.959458] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.381 [2024-11-15 10:32:52.959462] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1975750) 00:14:52.381 [2024-11-15 10:32:52.959469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.381 [2024-11-15 10:32:52.959490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9ec0, cid 5, qid 0 00:14:52.381 [2024-11-15 10:32:52.959510] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9d40, cid 4, qid 0 00:14:52.381 [2024-11-15 10:32:52.959516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19da040, cid 6, qid 0 00:14:52.381 [2024-11-15 10:32:52.959521] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19da1c0, cid 7, qid 0 00:14:52.381 [2024-11-15 10:32:52.959684] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:52.381 [2024-11-15 10:32:52.959692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:52.381 [2024-11-15 10:32:52.959697] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:52.382 [2024-11-15 10:32:52.959701] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1975750): datao=0, datal=8192, cccid=5 00:14:52.382 [2024-11-15 10:32:52.959706] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19d9ec0) on tqpair(0x1975750): expected_datao=0, payload_size=8192 00:14:52.382 [2024-11-15 10:32:52.959711] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.382 [2024-11-15 10:32:52.959730] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:52.382 [2024-11-15 10:32:52.959735] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:52.382 [2024-11-15 10:32:52.959741] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:52.382 [2024-11-15 10:32:52.959747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:52.382 [2024-11-15 10:32:52.959751] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:52.382 [2024-11-15 10:32:52.959755] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1975750): datao=0, datal=512, cccid=4 00:14:52.382 [2024-11-15 10:32:52.959760] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19d9d40) on tqpair(0x1975750): expected_datao=0, payload_size=512 00:14:52.382 [2024-11-15 10:32:52.959765] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.382 [2024-11-15 10:32:52.959772] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:52.382 [2024-11-15 10:32:52.959776] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:52.382 [2024-11-15 10:32:52.959782] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:52.382 [2024-11-15 10:32:52.959789] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:52.382 [2024-11-15 10:32:52.959792] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:52.382 [2024-11-15 10:32:52.959796] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1975750): datao=0, datal=512, cccid=6 00:14:52.382 [2024-11-15 10:32:52.959801] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19da040) on tqpair(0x1975750): expected_datao=0, payload_size=512 00:14:52.382 [2024-11-15 
10:32:52.959806] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.382 [2024-11-15 10:32:52.959813] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:52.382 [2024-11-15 10:32:52.959817] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:52.382 [2024-11-15 10:32:52.959823] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:52.382 [2024-11-15 10:32:52.959829] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:52.382 [2024-11-15 10:32:52.959833] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:52.382 [2024-11-15 10:32:52.959837] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1975750): datao=0, datal=4096, cccid=7 00:14:52.382 [2024-11-15 10:32:52.959842] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19da1c0) on tqpair(0x1975750): expected_datao=0, payload_size=4096 00:14:52.382 [2024-11-15 10:32:52.959847] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.382 [2024-11-15 10:32:52.959861] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:52.382 [2024-11-15 10:32:52.959865] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:52.382 [2024-11-15 10:32:52.959874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.382 [2024-11-15 10:32:52.959880] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.382 [2024-11-15 10:32:52.959884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.382 [2024-11-15 10:32:52.959888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9ec0) on tqpair=0x1975750 00:14:52.382 [2024-11-15 10:32:52.959906] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.382 [2024-11-15 10:32:52.959913] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.382 [2024-11-15 10:32:52.959917] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.382 [2024-11-15 10:32:52.959921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9d40) on tqpair=0x1975750 00:14:52.382 [2024-11-15 10:32:52.959935] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.382 [2024-11-15 10:32:52.959941] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.382 [2024-11-15 10:32:52.959945] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.382 [2024-11-15 10:32:52.959950] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19da040) on tqpair=0x1975750 00:14:52.382 [2024-11-15 10:32:52.959957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.382 [2024-11-15 10:32:52.959964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.382 [2024-11-15 10:32:52.959968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.382 ===================================================== 00:14:52.382 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:52.382 ===================================================== 00:14:52.382 Controller Capabilities/Features 00:14:52.382 ================================ 00:14:52.382 Vendor ID: 8086 00:14:52.382 Subsystem Vendor ID: 8086 00:14:52.382 Serial Number: SPDK00000000000001 00:14:52.382 Model Number: SPDK bdev Controller 00:14:52.382 Firmware Version: 25.01 00:14:52.382 Recommended Arb Burst: 6 00:14:52.382 IEEE OUI Identifier: e4 d2 5c 
00:14:52.382 Multi-path I/O 00:14:52.382 May have multiple subsystem ports: Yes 00:14:52.382 May have multiple controllers: Yes 00:14:52.382 Associated with SR-IOV VF: No 00:14:52.382 Max Data Transfer Size: 131072 00:14:52.382 Max Number of Namespaces: 32 00:14:52.382 Max Number of I/O Queues: 127 00:14:52.382 NVMe Specification Version (VS): 1.3 00:14:52.382 NVMe Specification Version (Identify): 1.3 00:14:52.382 Maximum Queue Entries: 128 00:14:52.382 Contiguous Queues Required: Yes 00:14:52.382 Arbitration Mechanisms Supported 00:14:52.382 Weighted Round Robin: Not Supported 00:14:52.382 Vendor Specific: Not Supported 00:14:52.382 Reset Timeout: 15000 ms 00:14:52.382 Doorbell Stride: 4 bytes 00:14:52.382 NVM Subsystem Reset: Not Supported 00:14:52.382 Command Sets Supported 00:14:52.382 NVM Command Set: Supported 00:14:52.382 Boot Partition: Not Supported 00:14:52.382 Memory Page Size Minimum: 4096 bytes 00:14:52.382 Memory Page Size Maximum: 4096 bytes 00:14:52.382 Persistent Memory Region: Not Supported 00:14:52.382 Optional Asynchronous Events Supported 00:14:52.382 Namespace Attribute Notices: Supported 00:14:52.382 Firmware Activation Notices: Not Supported 00:14:52.382 ANA Change Notices: Not Supported 00:14:52.382 PLE Aggregate Log Change Notices: Not Supported 00:14:52.382 LBA Status Info Alert Notices: Not Supported 00:14:52.382 EGE Aggregate Log Change Notices: Not Supported 00:14:52.382 Normal NVM Subsystem Shutdown event: Not Supported 00:14:52.383 Zone Descriptor Change Notices: Not Supported 00:14:52.383 Discovery Log Change Notices: Not Supported 00:14:52.383 Controller Attributes 00:14:52.383 128-bit Host Identifier: Supported 00:14:52.383 Non-Operational Permissive Mode: Not Supported 00:14:52.383 NVM Sets: Not Supported 00:14:52.383 Read Recovery Levels: Not Supported 00:14:52.383 Endurance Groups: Not Supported 00:14:52.383 Predictable Latency Mode: Not Supported 00:14:52.383 Traffic Based Keep ALive: Not Supported 00:14:52.383 Namespace Granularity: Not Supported 00:14:52.383 SQ Associations: Not Supported 00:14:52.383 UUID List: Not Supported 00:14:52.383 Multi-Domain Subsystem: Not Supported 00:14:52.383 Fixed Capacity Management: Not Supported 00:14:52.383 Variable Capacity Management: Not Supported 00:14:52.383 Delete Endurance Group: Not Supported 00:14:52.383 Delete NVM Set: Not Supported 00:14:52.383 Extended LBA Formats Supported: Not Supported 00:14:52.383 Flexible Data Placement Supported: Not Supported 00:14:52.383 00:14:52.383 Controller Memory Buffer Support 00:14:52.383 ================================ 00:14:52.383 Supported: No 00:14:52.383 00:14:52.383 Persistent Memory Region Support 00:14:52.383 ================================ 00:14:52.383 Supported: No 00:14:52.383 00:14:52.383 Admin Command Set Attributes 00:14:52.383 ============================ 00:14:52.383 Security Send/Receive: Not Supported 00:14:52.383 Format NVM: Not Supported 00:14:52.383 Firmware Activate/Download: Not Supported 00:14:52.383 Namespace Management: Not Supported 00:14:52.383 Device Self-Test: Not Supported 00:14:52.383 Directives: Not Supported 00:14:52.383 NVMe-MI: Not Supported 00:14:52.383 Virtualization Management: Not Supported 00:14:52.383 Doorbell Buffer Config: Not Supported 00:14:52.383 Get LBA Status Capability: Not Supported 00:14:52.383 Command & Feature Lockdown Capability: Not Supported 00:14:52.383 Abort Command Limit: 4 00:14:52.383 Async Event Request Limit: 4 00:14:52.383 Number of Firmware Slots: N/A 00:14:52.383 Firmware Slot 1 Read-Only: N/A 
00:14:52.383 [2024-11-15 10:32:52.959972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19da1c0) on tqpair=0x1975750 00:14:52.383 Firmware Activation Without Reset: N/A 00:14:52.383 Multiple Update Detection Support: N/A 00:14:52.383 Firmware Update Granularity: No Information Provided 00:14:52.383 Per-Namespace SMART Log: No 00:14:52.383 Asymmetric Namespace Access Log Page: Not Supported 00:14:52.383 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:52.383 Command Effects Log Page: Supported 00:14:52.383 Get Log Page Extended Data: Supported 00:14:52.383 Telemetry Log Pages: Not Supported 00:14:52.383 Persistent Event Log Pages: Not Supported 00:14:52.383 Supported Log Pages Log Page: May Support 00:14:52.383 Commands Supported & Effects Log Page: Not Supported 00:14:52.383 Feature Identifiers & Effects Log Page:May Support 00:14:52.383 NVMe-MI Commands & Effects Log Page: May Support 00:14:52.383 Data Area 4 for Telemetry Log: Not Supported 00:14:52.383 Error Log Page Entries Supported: 128 00:14:52.383 Keep Alive: Supported 00:14:52.383 Keep Alive Granularity: 10000 ms 00:14:52.383 00:14:52.383 NVM Command Set Attributes 00:14:52.383 ========================== 00:14:52.383 Submission Queue Entry Size 00:14:52.383 Max: 64 00:14:52.383 Min: 64 00:14:52.383 Completion Queue Entry Size 00:14:52.383 Max: 16 00:14:52.383 Min: 16 00:14:52.383 Number of Namespaces: 32 00:14:52.383 Compare Command: Supported 00:14:52.383 Write Uncorrectable Command: Not Supported 00:14:52.383 Dataset Management Command: Supported 00:14:52.383 Write Zeroes Command: Supported 00:14:52.383 Set Features Save Field: Not Supported 00:14:52.383 Reservations: Supported 00:14:52.383 Timestamp: Not Supported 00:14:52.383 Copy: Supported 00:14:52.383 Volatile Write Cache: Present 00:14:52.383 Atomic Write Unit (Normal): 1 00:14:52.383 Atomic Write Unit (PFail): 1 00:14:52.383 Atomic Compare & Write Unit: 1 00:14:52.383 Fused Compare & Write: Supported 00:14:52.383 Scatter-Gather List 00:14:52.383 SGL Command Set: Supported 00:14:52.383 SGL Keyed: Supported 00:14:52.383 SGL Bit Bucket Descriptor: Not Supported 00:14:52.383 SGL Metadata Pointer: Not Supported 00:14:52.383 Oversized SGL: Not Supported 00:14:52.383 SGL Metadata Address: Not Supported 00:14:52.383 SGL Offset: Supported 00:14:52.383 Transport SGL Data Block: Not Supported 00:14:52.383 Replay Protected Memory Block: Not Supported 00:14:52.383 00:14:52.383 Firmware Slot Information 00:14:52.383 ========================= 00:14:52.383 Active slot: 1 00:14:52.383 Slot 1 Firmware Revision: 25.01 00:14:52.383 00:14:52.383 00:14:52.383 Commands Supported and Effects 00:14:52.383 ============================== 00:14:52.383 Admin Commands 00:14:52.383 -------------- 00:14:52.383 Get Log Page (02h): Supported 00:14:52.383 Identify (06h): Supported 00:14:52.383 Abort (08h): Supported 00:14:52.383 Set Features (09h): Supported 00:14:52.383 Get Features (0Ah): Supported 00:14:52.383 Asynchronous Event Request (0Ch): Supported 00:14:52.383 Keep Alive (18h): Supported 00:14:52.383 I/O Commands 00:14:52.383 ------------ 00:14:52.383 Flush (00h): Supported LBA-Change 00:14:52.383 Write (01h): Supported LBA-Change 00:14:52.383 Read (02h): Supported 00:14:52.383 Compare (05h): Supported 00:14:52.383 Write Zeroes (08h): Supported LBA-Change 00:14:52.383 Dataset Management (09h): Supported LBA-Change 00:14:52.383 Copy (19h): Supported LBA-Change 00:14:52.383 00:14:52.383 Error Log 00:14:52.383 ========= 00:14:52.383 00:14:52.383 Arbitration 00:14:52.383
=========== 00:14:52.383 Arbitration Burst: 1 00:14:52.383 00:14:52.383 Power Management 00:14:52.383 ================ 00:14:52.383 Number of Power States: 1 00:14:52.383 Current Power State: Power State #0 00:14:52.383 Power State #0: 00:14:52.383 Max Power: 0.00 W 00:14:52.383 Non-Operational State: Operational 00:14:52.383 Entry Latency: Not Reported 00:14:52.383 Exit Latency: Not Reported 00:14:52.383 Relative Read Throughput: 0 00:14:52.383 Relative Read Latency: 0 00:14:52.383 Relative Write Throughput: 0 00:14:52.383 Relative Write Latency: 0 00:14:52.383 Idle Power: Not Reported 00:14:52.383 Active Power: Not Reported 00:14:52.383 Non-Operational Permissive Mode: Not Supported 00:14:52.383 00:14:52.383 Health Information 00:14:52.383 ================== 00:14:52.383 Critical Warnings: 00:14:52.383 Available Spare Space: OK 00:14:52.383 Temperature: OK 00:14:52.383 Device Reliability: OK 00:14:52.383 Read Only: No 00:14:52.383 Volatile Memory Backup: OK 00:14:52.383 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:52.383 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:52.383 Available Spare: 0% 00:14:52.383 Available Spare Threshold: 0% 00:14:52.383 Life Percentage Used:[2024-11-15 10:32:52.964121] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.383 [2024-11-15 10:32:52.964133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1975750) 00:14:52.383 [2024-11-15 10:32:52.964143] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.383 [2024-11-15 10:32:52.964173] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19da1c0, cid 7, qid 0 00:14:52.383 [2024-11-15 10:32:52.964240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.384 [2024-11-15 10:32:52.964249] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.384 [2024-11-15 10:32:52.964253] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.384 [2024-11-15 10:32:52.964258] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19da1c0) on tqpair=0x1975750 00:14:52.384 [2024-11-15 10:32:52.964302] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:14:52.384 [2024-11-15 10:32:52.964316] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9740) on tqpair=0x1975750 00:14:52.384 [2024-11-15 10:32:52.964324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.384 [2024-11-15 10:32:52.964331] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d98c0) on tqpair=0x1975750 00:14:52.384 [2024-11-15 10:32:52.964336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.384 [2024-11-15 10:32:52.964341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9a40) on tqpair=0x1975750 00:14:52.384 [2024-11-15 10:32:52.964347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.384 [2024-11-15 10:32:52.964352] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9bc0) on tqpair=0x1975750 00:14:52.384 [2024-11-15 10:32:52.964357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.384 [2024-11-15 10:32:52.964368] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.384 [2024-11-15 10:32:52.964374] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.384 [2024-11-15 10:32:52.964378] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1975750) 00:14:52.384 [2024-11-15 10:32:52.964386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.384 [2024-11-15 10:32:52.964411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9bc0, cid 3, qid 0 00:14:52.384 [2024-11-15 10:32:52.964466] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.384 [2024-11-15 10:32:52.964473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.384 [2024-11-15 10:32:52.964477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.384 [2024-11-15 10:32:52.964482] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9bc0) on tqpair=0x1975750 00:14:52.384 [2024-11-15 10:32:52.964491] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.384 [2024-11-15 10:32:52.964496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.384 [2024-11-15 10:32:52.964500] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1975750) 00:14:52.384 [2024-11-15 10:32:52.964508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.384 [2024-11-15 10:32:52.964530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9bc0, cid 3, qid 0 00:14:52.384 [2024-11-15 10:32:52.964608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.384 [2024-11-15 10:32:52.964615] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.384 [2024-11-15 10:32:52.964619] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.384 [2024-11-15 10:32:52.964624] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9bc0) on tqpair=0x1975750 00:14:52.384 [2024-11-15 10:32:52.964630] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:14:52.384 [2024-11-15 10:32:52.964635] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:14:52.384 [2024-11-15 10:32:52.964646] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.384 [2024-11-15 10:32:52.964653] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.384 [2024-11-15 10:32:52.964657] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1975750) 00:14:52.384 [2024-11-15 10:32:52.964665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.384 [2024-11-15 10:32:52.964683] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9bc0, cid 3, qid 0 00:14:52.384 [2024-11-15 10:32:52.964743] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.384 [2024-11-15 10:32:52.964751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.384 [2024-11-15 10:32:52.964754] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:14:52.384 [2024-11-15 10:32:52.964759] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9bc0) on tqpair=0x1975750 00:14:52.384 [2024-11-15 10:32:52.964771] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.384 [2024-11-15 10:32:52.964777] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.384 [2024-11-15 10:32:52.964781] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1975750) 00:14:52.384 [2024-11-15 10:32:52.964789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.384 [2024-11-15 10:32:52.964806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9bc0, cid 3, qid 0 00:14:52.384 [2024-11-15 10:32:52.964864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.384 [2024-11-15 10:32:52.964871] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.384 [2024-11-15 10:32:52.964875] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.384 [2024-11-15 10:32:52.964879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9bc0) on tqpair=0x1975750 00:14:52.384 [2024-11-15 10:32:52.964890] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.384 [2024-11-15 10:32:52.964895] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.384 [2024-11-15 10:32:52.964900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1975750) 00:14:52.384 [2024-11-15 10:32:52.964907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.384 [2024-11-15 10:32:52.964925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9bc0, cid 3, qid 0 00:14:52.384 [2024-11-15 10:32:52.964976] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.384 [2024-11-15 10:32:52.964983] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.384 [2024-11-15 10:32:52.964987] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.384 [2024-11-15 10:32:52.964992] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9bc0) on tqpair=0x1975750 00:14:52.384 [2024-11-15 10:32:52.965003] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.384 [2024-11-15 10:32:52.965008] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.384 [2024-11-15 10:32:52.965012] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1975750) 00:14:52.384 [2024-11-15 10:32:52.965020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.384 [2024-11-15 10:32:52.965037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9bc0, cid 3, qid 0 00:14:52.384 [2024-11-15 10:32:52.965110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.384 [2024-11-15 10:32:52.965120] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.384 [2024-11-15 10:32:52.965124] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.384 [2024-11-15 10:32:52.965129] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9bc0) on tqpair=0x1975750 00:14:52.384 [2024-11-15 10:32:52.965140] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.965146] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.965150] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1975750) 00:14:52.385 [2024-11-15 10:32:52.965158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.385 [2024-11-15 10:32:52.965179] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9bc0, cid 3, qid 0 00:14:52.385 [2024-11-15 10:32:52.965229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.385 [2024-11-15 10:32:52.965237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.385 [2024-11-15 10:32:52.965241] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.965245] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9bc0) on tqpair=0x1975750 00:14:52.385 [2024-11-15 10:32:52.965256] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.965261] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.965266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1975750) 00:14:52.385 [2024-11-15 10:32:52.965273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.385 [2024-11-15 10:32:52.965291] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9bc0, cid 3, qid 0 00:14:52.385 [2024-11-15 10:32:52.965348] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.385 [2024-11-15 10:32:52.965355] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.385 [2024-11-15 10:32:52.965359] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.965364] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9bc0) on tqpair=0x1975750 00:14:52.385 [2024-11-15 10:32:52.965375] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.965380] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.965384] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1975750) 00:14:52.385 [2024-11-15 10:32:52.965392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.385 [2024-11-15 10:32:52.965410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9bc0, cid 3, qid 0 00:14:52.385 [2024-11-15 10:32:52.965462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.385 [2024-11-15 10:32:52.965469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.385 [2024-11-15 10:32:52.965473] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.965478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9bc0) on tqpair=0x1975750 00:14:52.385 [2024-11-15 10:32:52.965489] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.965494] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.965498] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1975750) 00:14:52.385 [2024-11-15 10:32:52.965506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.385 [2024-11-15 10:32:52.965523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9bc0, cid 3, qid 0 00:14:52.385 [2024-11-15 10:32:52.965580] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.385 [2024-11-15 10:32:52.965588] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.385 [2024-11-15 10:32:52.965592] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.965596] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9bc0) on tqpair=0x1975750 00:14:52.385 [2024-11-15 10:32:52.965607] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.965612] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.965616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1975750) 00:14:52.385 [2024-11-15 10:32:52.965624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.385 [2024-11-15 10:32:52.965642] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9bc0, cid 3, qid 0 00:14:52.385 [2024-11-15 10:32:52.965693] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.385 [2024-11-15 10:32:52.965701] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.385 [2024-11-15 10:32:52.965704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.965709] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9bc0) on tqpair=0x1975750 00:14:52.385 [2024-11-15 10:32:52.965720] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.965726] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.965730] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1975750) 00:14:52.385 [2024-11-15 10:32:52.965738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.385 [2024-11-15 10:32:52.965755] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9bc0, cid 3, qid 0 00:14:52.385 [2024-11-15 10:32:52.965810] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.385 [2024-11-15 10:32:52.965817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.385 [2024-11-15 10:32:52.965821] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.965825] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9bc0) on tqpair=0x1975750 00:14:52.385 [2024-11-15 10:32:52.965836] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.965842] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.965846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1975750) 00:14:52.385 [2024-11-15 10:32:52.965854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.385 [2024-11-15 10:32:52.965872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9bc0, cid 3, qid 0 00:14:52.385 [2024-11-15 10:32:52.965926] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.385 [2024-11-15 10:32:52.965933] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.385 [2024-11-15 10:32:52.965937] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.965941] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9bc0) on tqpair=0x1975750 00:14:52.385 [2024-11-15 10:32:52.965952] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.965958] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.965962] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1975750) 00:14:52.385 [2024-11-15 10:32:52.965970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.385 [2024-11-15 10:32:52.965987] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9bc0, cid 3, qid 0 00:14:52.385 [2024-11-15 10:32:52.966038] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.385 [2024-11-15 10:32:52.966046] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.385 [2024-11-15 10:32:52.966065] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.966070] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9bc0) on tqpair=0x1975750 00:14:52.385 [2024-11-15 10:32:52.966083] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.966089] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.966093] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1975750) 00:14:52.385 [2024-11-15 10:32:52.966104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.385 [2024-11-15 10:32:52.966125] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9bc0, cid 3, qid 0 00:14:52.385 [2024-11-15 10:32:52.966181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.385 [2024-11-15 10:32:52.966188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.385 [2024-11-15 10:32:52.966192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.385 [2024-11-15 10:32:52.966197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9bc0) on tqpair=0x1975750 00:14:52.385 [2024-11-15 10:32:52.966208] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.386 [2024-11-15 10:32:52.966213] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.386 [2024-11-15 10:32:52.966217] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1975750) 00:14:52.386 [2024-11-15 10:32:52.966225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.386 [2024-11-15 10:32:52.966243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9bc0, cid 3, qid 0 00:14:52.386 [2024-11-15 
10:32:52.966297] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.386 [2024-11-15 10:32:52.966304] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.386 [2024-11-15 10:32:52.966308] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.386 [2024-11-15 10:32:52.966312] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9bc0) on tqpair=0x1975750 00:14:52.386 [2024-11-15 10:32:52.966323] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.386 [2024-11-15 10:32:52.966329] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.386 [2024-11-15 10:32:52.966333] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1975750) 00:14:52.386 [2024-11-15 10:32:52.966340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.386 [2024-11-15 10:32:52.966358] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9bc0, cid 3, qid 0 00:14:52.386
[2024-11-15 10:32:52.972072] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.387 [2024-11-15 10:32:52.972087] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.387 [2024-11-15 10:32:52.972092] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.387 [2024-11-15 10:32:52.972097] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9bc0) on tqpair=0x1975750 00:14:52.387 [2024-11-15 10:32:52.972112] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:52.387 [2024-11-15 10:32:52.972118] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:52.387 [2024-11-15 10:32:52.972122] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1975750) 00:14:52.387 [2024-11-15 10:32:52.972131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.387 [2024-11-15 10:32:52.972157]
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d9bc0, cid 3, qid 0 00:14:52.387 [2024-11-15 10:32:52.972219] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:52.387 [2024-11-15 10:32:52.972227] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:52.387 [2024-11-15 10:32:52.972231] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:52.387 [2024-11-15 10:32:52.972235] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d9bc0) on tqpair=0x1975750 00:14:52.387 [2024-11-15 10:32:52.972245] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:14:52.387 0% 00:14:52.387 Data Units Read: 0 00:14:52.387 Data Units Written: 0 00:14:52.387 Host Read Commands: 0 00:14:52.388 Host Write Commands: 0 00:14:52.388 Controller Busy Time: 0 minutes 00:14:52.388 Power Cycles: 0 00:14:52.388 Power On Hours: 0 hours 00:14:52.388 Unsafe Shutdowns: 0 00:14:52.388 Unrecoverable Media Errors: 0 00:14:52.388 Lifetime Error Log Entries: 0 00:14:52.388 Warning Temperature Time: 0 minutes 00:14:52.388 Critical Temperature Time: 0 minutes 00:14:52.388 00:14:52.388 Number of Queues 00:14:52.388 ================ 00:14:52.388 Number of I/O Submission Queues: 127 00:14:52.388 Number of I/O Completion Queues: 127 00:14:52.388 00:14:52.388 Active Namespaces 00:14:52.388 ================= 00:14:52.388 Namespace ID:1 00:14:52.388 Error Recovery Timeout: Unlimited 00:14:52.388 Command Set Identifier: NVM (00h) 00:14:52.388 Deallocate: Supported 00:14:52.388 Deallocated/Unwritten Error: Not Supported 00:14:52.388 Deallocated Read Value: Unknown 00:14:52.388 Deallocate in Write Zeroes: Not Supported 00:14:52.388 Deallocated Guard Field: 0xFFFF 00:14:52.388 Flush: Supported 00:14:52.388 Reservation: Supported 00:14:52.388 Namespace Sharing Capabilities: Multiple Controllers 00:14:52.388 Size (in LBAs): 131072 (0GiB) 00:14:52.388 Capacity (in LBAs): 131072 (0GiB) 00:14:52.388 Utilization (in LBAs): 131072 (0GiB) 00:14:52.388 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:52.388 EUI64: ABCDEF0123456789 00:14:52.388 UUID: 7fc207f9-7f3d-4541-86a3-f9c64def9c5e 00:14:52.388 Thin Provisioning: Not Supported 00:14:52.388 Per-NS Atomic Units: Yes 00:14:52.388 Atomic Boundary Size (Normal): 0 00:14:52.388 Atomic Boundary Size (PFail): 0 00:14:52.388 Atomic Boundary Offset: 0 00:14:52.388 Maximum Single Source Range Length: 65535 00:14:52.388 Maximum Copy Length: 65535 00:14:52.388 Maximum Source Range Count: 1 00:14:52.388 NGUID/EUI64 Never Reused: No 00:14:52.388 Namespace Write Protected: No 00:14:52.388 Number of LBA Formats: 1 00:14:52.388 Current LBA Format: LBA Format #00 00:14:52.388 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:52.388 00:14:52.388 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:14:52.647 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 
-- # nvmftestfini 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:52.648 rmmod nvme_tcp 00:14:52.648 rmmod nvme_fabrics 00:14:52.648 rmmod nvme_keyring 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 74205 ']' 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 74205 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 74205 ']' 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 74205 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74205 00:14:52.648 killing process with pid 74205 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74205' 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 74205 00:14:52.648 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 74205 00:14:52.907 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:52.907 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:52.907 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:52.907 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:14:52.907 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:14:52.907 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:52.907 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:14:52.907 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:52.907 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:52.907 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:52.907 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:52.907 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:52.907 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:52.907 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:52.907 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:52.907 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:52.907 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:52.907 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:52.907 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:52.907 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:52.907 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:52.907 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:53.166 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:53.166 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.166 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:53.166 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.166 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:14:53.166 00:14:53.166 real 0m2.550s 00:14:53.166 user 0m5.710s 00:14:53.166 sys 0m0.767s 00:14:53.166 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:53.166 ************************************ 00:14:53.166 END TEST nvmf_identify 00:14:53.166 ************************************ 00:14:53.166 10:32:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:53.166 10:32:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:53.166 10:32:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:53.166 10:32:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:53.166 10:32:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:53.166 ************************************ 00:14:53.166 START TEST nvmf_perf 00:14:53.166 ************************************ 00:14:53.166 10:32:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:53.166 * Looking for test storage... 
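The identify stage has now torn down its target and the log switches to the perf stage. For reference, run_test does nothing more than execute the script named on the run_test line above, so the stage can in principle be re-run by hand from the same checkout (a sketch only; it assumes root privileges and an SPDK build already present in this workspace):

  cd /home/vagrant/spdk_repo/spdk
  sudo ./test/nvmf/host/perf.sh --transport=tcp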
00:14:53.166 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:53.166 10:32:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:53.166 10:32:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:14:53.166 10:32:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:53.427 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:53.427 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:53.427 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:53.427 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:53.427 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:14:53.427 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:53.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.428 --rc genhtml_branch_coverage=1 00:14:53.428 --rc genhtml_function_coverage=1 00:14:53.428 --rc genhtml_legend=1 00:14:53.428 --rc geninfo_all_blocks=1 00:14:53.428 --rc geninfo_unexecuted_blocks=1 00:14:53.428 00:14:53.428 ' 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:53.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.428 --rc genhtml_branch_coverage=1 00:14:53.428 --rc genhtml_function_coverage=1 00:14:53.428 --rc genhtml_legend=1 00:14:53.428 --rc geninfo_all_blocks=1 00:14:53.428 --rc geninfo_unexecuted_blocks=1 00:14:53.428 00:14:53.428 ' 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:53.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.428 --rc genhtml_branch_coverage=1 00:14:53.428 --rc genhtml_function_coverage=1 00:14:53.428 --rc genhtml_legend=1 00:14:53.428 --rc geninfo_all_blocks=1 00:14:53.428 --rc geninfo_unexecuted_blocks=1 00:14:53.428 00:14:53.428 ' 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:53.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.428 --rc genhtml_branch_coverage=1 00:14:53.428 --rc genhtml_function_coverage=1 00:14:53.428 --rc genhtml_legend=1 00:14:53.428 --rc geninfo_all_blocks=1 00:14:53.428 --rc geninfo_unexecuted_blocks=1 00:14:53.428 00:14:53.428 ' 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:53.428 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:53.428 Cannot find device "nvmf_init_br" 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:53.428 Cannot find device "nvmf_init_br2" 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:53.428 Cannot find device "nvmf_tgt_br" 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:53.428 Cannot find device "nvmf_tgt_br2" 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:53.428 Cannot find device "nvmf_init_br" 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:53.428 Cannot find device "nvmf_init_br2" 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:53.428 Cannot find device "nvmf_tgt_br" 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:53.428 Cannot find device "nvmf_tgt_br2" 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:53.428 Cannot find device "nvmf_br" 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:53.428 Cannot find device "nvmf_init_if" 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:53.428 Cannot find device "nvmf_init_if2" 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:53.428 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:53.428 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:53.428 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:53.689 10:32:54 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:53.689 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:53.689 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:14:53.689 00:14:53.689 --- 10.0.0.3 ping statistics --- 00:14:53.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.689 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:53.689 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:14:53.689 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:14:53.689 00:14:53.689 --- 10.0.0.4 ping statistics --- 00:14:53.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.689 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:53.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:53.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:14:53.689 00:14:53.689 --- 10.0.0.1 ping statistics --- 00:14:53.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.689 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:53.689 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:53.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:53.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:14:53.689 00:14:53.689 --- 10.0.0.2 ping statistics --- 00:14:53.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.690 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:14:53.690 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.690 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:14:53.690 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:53.690 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.690 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:53.690 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:53.690 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.690 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:53.690 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:53.690 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:53.690 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:53.690 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:53.690 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:53.690 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74457 00:14:53.690 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:53.690 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74457 00:14:53.690 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 74457 ']' 00:14:53.690 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.690 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:53.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.690 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
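The ping checks above verify the veth topology that nvmf_veth_init has just built. A rough manual equivalent of that setup is sketched below (bash; it reuses the interface names, addresses, and nvmf_tgt path shown in this log, omits the second nvmf_init_if2/nvmf_tgt_if2 pair, and assumes root):

  # namespace plus one initiator-side and one target-side veth pair (names as used by the harness)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # initiator stays in the root namespace on 10.0.0.1; the target listens on 10.0.0.3 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  # bridge the host-side peers together and bring every link up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # accept NVMe/TCP traffic on the default port, then start the target inside the namespace
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF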
00:14:53.690 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:53.690 10:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:54.015 [2024-11-15 10:32:54.568417] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:14:54.015 [2024-11-15 10:32:54.568502] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.015 [2024-11-15 10:32:54.714834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:54.015 [2024-11-15 10:32:54.781982] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.015 [2024-11-15 10:32:54.782044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.015 [2024-11-15 10:32:54.782084] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.015 [2024-11-15 10:32:54.782095] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.015 [2024-11-15 10:32:54.782104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.015 [2024-11-15 10:32:54.783448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.015 [2024-11-15 10:32:54.783604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.015 [2024-11-15 10:32:54.783870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:54.015 [2024-11-15 10:32:54.783891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.274 [2024-11-15 10:32:54.840031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:54.842 10:32:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:54.842 10:32:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:14:54.842 10:32:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:54.842 10:32:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:54.842 10:32:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:54.842 10:32:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.842 10:32:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:54.842 10:32:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:55.411 10:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:55.411 10:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:55.670 10:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:14:55.670 10:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:55.931 10:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:55.931 10:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:14:55.931 10:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:55.931 10:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:55.931 10:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:56.190 [2024-11-15 10:32:56.966383] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.190 10:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:56.448 10:32:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:56.448 10:32:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:56.707 10:32:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:56.707 10:32:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:56.965 10:32:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:57.275 [2024-11-15 10:32:58.037775] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:57.275 10:32:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:57.843 10:32:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:14:57.843 10:32:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:57.843 10:32:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:57.843 10:32:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:58.780 Initializing NVMe Controllers 00:14:58.780 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:58.780 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:58.780 Initialization complete. Launching workers. 00:14:58.780 ======================================================== 00:14:58.780 Latency(us) 00:14:58.780 Device Information : IOPS MiB/s Average min max 00:14:58.780 PCIE (0000:00:10.0) NSID 1 from core 0: 23451.90 91.61 1364.41 345.93 7688.49 00:14:58.780 ======================================================== 00:14:58.780 Total : 23451.90 91.61 1364.41 345.93 7688.49 00:14:58.780 00:14:58.780 10:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:00.157 Initializing NVMe Controllers 00:15:00.157 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:00.157 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:00.157 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:00.157 Initialization complete. Launching workers. 
00:15:00.157 ======================================================== 00:15:00.157 Latency(us) 00:15:00.157 Device Information : IOPS MiB/s Average min max 00:15:00.157 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3028.04 11.83 325.78 114.34 5134.79 00:15:00.157 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 120.52 0.47 8296.41 6034.54 16050.59 00:15:00.157 ======================================================== 00:15:00.157 Total : 3148.57 12.30 630.89 114.34 16050.59 00:15:00.157 00:15:00.157 10:33:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:01.532 Initializing NVMe Controllers 00:15:01.532 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:01.532 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:01.532 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:01.532 Initialization complete. Launching workers. 00:15:01.532 ======================================================== 00:15:01.532 Latency(us) 00:15:01.532 Device Information : IOPS MiB/s Average min max 00:15:01.532 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7639.39 29.84 4192.60 656.42 13206.53 00:15:01.532 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3081.14 12.04 10399.61 6539.98 39332.70 00:15:01.532 ======================================================== 00:15:01.532 Total : 10720.54 41.88 5976.53 656.42 39332.70 00:15:01.532 00:15:01.532 10:33:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:01.532 10:33:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:04.063 Initializing NVMe Controllers 00:15:04.063 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:04.063 Controller IO queue size 128, less than required. 00:15:04.063 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:04.063 Controller IO queue size 128, less than required. 00:15:04.063 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:04.063 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:04.063 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:04.063 Initialization complete. Launching workers. 
00:15:04.063 ======================================================== 00:15:04.063 Latency(us) 00:15:04.063 Device Information : IOPS MiB/s Average min max 00:15:04.063 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1602.36 400.59 81776.74 42000.96 134692.87 00:15:04.063 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 650.44 162.61 201807.27 83875.50 302608.17 00:15:04.063 ======================================================== 00:15:04.063 Total : 2252.80 563.20 116432.69 42000.96 302608.17 00:15:04.063 00:15:04.063 10:33:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:15:04.321 Initializing NVMe Controllers 00:15:04.321 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:04.321 Controller IO queue size 128, less than required. 00:15:04.321 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:04.321 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:04.321 Controller IO queue size 128, less than required. 00:15:04.321 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:04.321 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:04.321 WARNING: Some requested NVMe devices were skipped 00:15:04.321 No valid NVMe controllers or AIO or URING devices found 00:15:04.321 10:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:15:06.869 Initializing NVMe Controllers 00:15:06.869 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:06.869 Controller IO queue size 128, less than required. 00:15:06.869 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:06.869 Controller IO queue size 128, less than required. 00:15:06.869 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:06.869 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:06.869 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:06.869 Initialization complete. Launching workers. 
00:15:06.869 00:15:06.869 ==================== 00:15:06.869 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:06.869 TCP transport: 00:15:06.869 polls: 10760 00:15:06.869 idle_polls: 7595 00:15:06.869 sock_completions: 3165 00:15:06.869 nvme_completions: 5923 00:15:06.869 submitted_requests: 8864 00:15:06.869 queued_requests: 1 00:15:06.869 00:15:06.869 ==================== 00:15:06.869 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:06.869 TCP transport: 00:15:06.869 polls: 11529 00:15:06.869 idle_polls: 7700 00:15:06.869 sock_completions: 3829 00:15:06.869 nvme_completions: 6623 00:15:06.869 submitted_requests: 9918 00:15:06.869 queued_requests: 1 00:15:06.869 ======================================================== 00:15:06.869 Latency(us) 00:15:06.869 Device Information : IOPS MiB/s Average min max 00:15:06.869 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1477.23 369.31 87606.92 37428.42 195130.05 00:15:06.869 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1651.84 412.96 78573.45 35964.66 148031.13 00:15:06.869 ======================================================== 00:15:06.869 Total : 3129.06 782.27 82838.14 35964.66 195130.05 00:15:06.869 00:15:06.869 10:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:07.128 10:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:07.388 10:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:07.388 10:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:07.388 10:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:07.388 10:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:07.388 10:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:15:07.388 10:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:07.388 10:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:15:07.388 10:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:07.388 10:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:07.388 rmmod nvme_tcp 00:15:07.388 rmmod nvme_fabrics 00:15:07.388 rmmod nvme_keyring 00:15:07.388 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:07.388 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:15:07.388 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:15:07.388 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74457 ']' 00:15:07.388 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74457 00:15:07.388 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 74457 ']' 00:15:07.388 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 74457 00:15:07.388 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:15:07.388 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:07.388 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74457 00:15:07.388 killing process with pid 74457 00:15:07.388 10:33:08 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:07.388 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:07.388 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74457' 00:15:07.388 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 74457 00:15:07.388 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 74457 00:15:07.957 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:07.957 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:07.957 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:07.957 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:15:07.957 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:15:07.957 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:07.957 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:15:07.957 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:07.957 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:07.957 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:07.957 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:07.957 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:07.957 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:08.217 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:08.217 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:08.217 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:08.217 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:08.217 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:08.217 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:08.217 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:08.217 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:08.217 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:08.217 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:08.217 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.217 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:08.217 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.217 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:15:08.217 00:15:08.217 real 0m15.136s 00:15:08.217 user 0m54.859s 00:15:08.217 sys 0m4.220s 00:15:08.217 10:33:09 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:08.217 ************************************ 00:15:08.217 END TEST nvmf_perf 00:15:08.217 ************************************ 00:15:08.217 10:33:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:08.217 10:33:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:08.217 10:33:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:08.217 10:33:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:08.217 10:33:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:08.217 ************************************ 00:15:08.217 START TEST nvmf_fio_host 00:15:08.217 ************************************ 00:15:08.217 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:08.478 * Looking for test storage... 00:15:08.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:08.478 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:08.478 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:15:08.478 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:08.478 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:08.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.479 --rc genhtml_branch_coverage=1 00:15:08.479 --rc genhtml_function_coverage=1 00:15:08.479 --rc genhtml_legend=1 00:15:08.479 --rc geninfo_all_blocks=1 00:15:08.479 --rc geninfo_unexecuted_blocks=1 00:15:08.479 00:15:08.479 ' 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:08.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.479 --rc genhtml_branch_coverage=1 00:15:08.479 --rc genhtml_function_coverage=1 00:15:08.479 --rc genhtml_legend=1 00:15:08.479 --rc geninfo_all_blocks=1 00:15:08.479 --rc geninfo_unexecuted_blocks=1 00:15:08.479 00:15:08.479 ' 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:08.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.479 --rc genhtml_branch_coverage=1 00:15:08.479 --rc genhtml_function_coverage=1 00:15:08.479 --rc genhtml_legend=1 00:15:08.479 --rc geninfo_all_blocks=1 00:15:08.479 --rc geninfo_unexecuted_blocks=1 00:15:08.479 00:15:08.479 ' 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:08.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.479 --rc genhtml_branch_coverage=1 00:15:08.479 --rc genhtml_function_coverage=1 00:15:08.479 --rc genhtml_legend=1 00:15:08.479 --rc geninfo_all_blocks=1 00:15:08.479 --rc geninfo_unexecuted_blocks=1 00:15:08.479 00:15:08.479 ' 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:08.479 10:33:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:08.479 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.480 10:33:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:08.480 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
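The nvmftestinit/nvmf_veth_init sequence traced next does not touch physical NICs; it builds a disposable virtual topology: two veth pairs for the initiator side, two for the target side (moved into the nvmf_tgt_ns_spdk namespace), all tied together through the nvmf_br bridge, with NVMe/TCP port 4420 opened in iptables. The following is a condensed, standalone sketch of that topology, assuming iproute2 and iptables are available; interface names and the 10.0.0.x addresses mirror the trace, and the iptables comment tag is simplified to SPDK_NVMF so the teardown shown later (iptables-save | grep -v SPDK_NVMF | iptables-restore) can strip the rules again. It is a summary of the traced commands, not one of the SPDK scripts.

#!/usr/bin/env bash
# Sketch of the virtual test network nvmf_veth_init builds (names mirror the trace).
set -e

ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry addresses, the *_br ends get enslaved to the bridge;
# the target-side interfaces are moved into the namespace.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator addresses are 10.0.0.1/.2, target addresses are 10.0.0.3/.4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring the links up and join the bridge ends under nvmf_br.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Open NVMe/TCP (4420) and allow bridge-local forwarding; tag the rules for later removal.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF

With this in place, the four pings to 10.0.0.1 through 10.0.0.4 further down simply confirm that the default and target namespaces can reach each other before nvmf_tgt is started.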
00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:08.480 Cannot find device "nvmf_init_br" 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:08.480 Cannot find device "nvmf_init_br2" 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:08.480 Cannot find device "nvmf_tgt_br" 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:15:08.480 Cannot find device "nvmf_tgt_br2" 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:15:08.480 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:08.739 Cannot find device "nvmf_init_br" 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:08.739 Cannot find device "nvmf_init_br2" 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:08.739 Cannot find device "nvmf_tgt_br" 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:08.739 Cannot find device "nvmf_tgt_br2" 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:08.739 Cannot find device "nvmf_br" 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:08.739 Cannot find device "nvmf_init_if" 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:08.739 Cannot find device "nvmf_init_if2" 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:08.739 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:08.739 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:08.739 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:08.999 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:08.999 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:15:08.999 00:15:08.999 --- 10.0.0.3 ping statistics --- 00:15:08.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.999 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:08.999 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:08.999 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:15:08.999 00:15:08.999 --- 10.0.0.4 ping statistics --- 00:15:08.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.999 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:08.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:08.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:15:08.999 00:15:08.999 --- 10.0.0.1 ping statistics --- 00:15:08.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.999 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:08.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:08.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:15:08.999 00:15:08.999 --- 10.0.0.2 ping statistics --- 00:15:08.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.999 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:08.999 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74927 00:15:09.000 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:09.000 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:09.000 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74927 00:15:09.000 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@833 -- # '[' -z 74927 ']' 00:15:09.000 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.000 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:09.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.000 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.000 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:09.000 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:09.000 [2024-11-15 10:33:09.772772] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:15:09.000 [2024-11-15 10:33:09.773469] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.259 [2024-11-15 10:33:09.920758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:09.259 [2024-11-15 10:33:09.985591] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.259 [2024-11-15 10:33:09.985664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.259 [2024-11-15 10:33:09.985691] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:09.259 [2024-11-15 10:33:09.985699] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:09.259 [2024-11-15 10:33:09.985706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
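With nvmf_tgt running inside the namespace (the DPDK/SPDK startup notices above), fio.sh configures the target over JSON-RPC and then drives I/O through the SPDK fio plugin, as the rest of the trace shows. A condensed sketch of that bring-up, using the same rpc.py calls that appear below (paths assume the /home/vagrant/spdk_repo/spdk layout used throughout this run):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Create the TCP transport (options as traced) and a 64 MiB, 512-byte-block RAM bdev to export.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc1

# Export the bdev as a namespace of cnode1 and listen on the namespaced target address.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

# I/O is then generated with stock fio plus the SPDK external ioengine: the plugin is
# LD_PRELOADed and the target is addressed entirely through --filename, so the I/O path
# uses SPDK's userspace NVMe/TCP initiator rather than a kernel block device.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096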
00:15:09.259 [2024-11-15 10:33:09.986918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.259 [2024-11-15 10:33:09.987034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.259 [2024-11-15 10:33:09.987188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.259 [2024-11-15 10:33:09.987188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:09.259 [2024-11-15 10:33:10.041843] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:09.947 10:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:09.947 10:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:15:09.947 10:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:10.207 [2024-11-15 10:33:11.025000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:10.207 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:10.207 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:10.207 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:10.466 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:10.725 Malloc1 00:15:10.725 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:10.983 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:11.243 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:11.502 [2024-11-15 10:33:12.167908] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:11.502 10:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:11.761 10:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:11.761 10:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:11.761 10:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:11.762 10:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:15:11.762 10:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:11.762 10:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:15:11.762 10:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:11.762 10:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:15:11.762 10:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:15:11.762 10:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:11.762 10:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:11.762 10:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:15:11.762 10:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:11.762 10:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:15:11.762 10:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:15:11.762 10:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:11.762 10:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:15:11.762 10:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:11.762 10:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:11.762 10:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:15:11.762 10:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:15:11.762 10:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:11.762 10:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:12.020 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:12.020 fio-3.35 00:15:12.020 Starting 1 thread 00:15:14.556 00:15:14.556 test: (groupid=0, jobs=1): err= 0: pid=75010: Fri Nov 15 10:33:14 2024 00:15:14.556 read: IOPS=8699, BW=34.0MiB/s (35.6MB/s)(68.2MiB/2007msec) 00:15:14.556 slat (usec): min=2, max=2978, avg= 2.85, stdev=22.66 00:15:14.556 clat (usec): min=1721, max=14328, avg=7656.89, stdev=631.66 00:15:14.556 lat (usec): min=1753, max=14331, avg=7659.73, stdev=631.59 00:15:14.556 clat percentiles (usec): 00:15:14.556 | 1.00th=[ 6194], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7242], 00:15:14.556 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:15:14.556 | 70.00th=[ 7898], 80.00th=[ 8029], 90.00th=[ 8291], 95.00th=[ 8455], 00:15:14.556 | 99.00th=[ 9241], 99.50th=[10290], 99.90th=[11863], 99.95th=[12649], 00:15:14.556 | 99.99th=[14353] 00:15:14.556 bw ( KiB/s): min=33712, max=35424, per=99.96%, avg=34784.00, stdev=743.50, samples=4 00:15:14.556 iops : min= 8428, max= 8856, avg=8696.00, stdev=185.87, samples=4 00:15:14.556 write: IOPS=8693, BW=34.0MiB/s (35.6MB/s)(68.2MiB/2007msec); 0 zone resets 00:15:14.556 slat (usec): min=2, max=155, avg= 2.79, stdev= 1.69 00:15:14.556 clat (usec): min=1583, max=14193, avg=7000.57, stdev=629.26 00:15:14.556 lat (usec): min=1592, max=14195, avg=7003.36, stdev=629.26 00:15:14.556 clat percentiles 
(usec): 00:15:14.556 | 1.00th=[ 5145], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 6652], 00:15:14.556 | 30.00th=[ 6783], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 00:15:14.556 | 70.00th=[ 7177], 80.00th=[ 7373], 90.00th=[ 7570], 95.00th=[ 7767], 00:15:14.556 | 99.00th=[ 8979], 99.50th=[10028], 99.90th=[11469], 99.95th=[11863], 00:15:14.556 | 99.99th=[13042] 00:15:14.556 bw ( KiB/s): min=34560, max=35088, per=100.00%, avg=34778.00, stdev=241.36, samples=4 00:15:14.556 iops : min= 8640, max= 8772, avg=8694.50, stdev=60.34, samples=4 00:15:14.556 lat (msec) : 2=0.03%, 4=0.13%, 10=99.30%, 20=0.54% 00:15:14.556 cpu : usr=68.20%, sys=23.38%, ctx=25, majf=0, minf=7 00:15:14.556 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:14.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:14.556 issued rwts: total=17460,17448,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.556 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:14.556 00:15:14.556 Run status group 0 (all jobs): 00:15:14.556 READ: bw=34.0MiB/s (35.6MB/s), 34.0MiB/s-34.0MiB/s (35.6MB/s-35.6MB/s), io=68.2MiB (71.5MB), run=2007-2007msec 00:15:14.556 WRITE: bw=34.0MiB/s (35.6MB/s), 34.0MiB/s-34.0MiB/s (35.6MB/s-35.6MB/s), io=68.2MiB (71.5MB), run=2007-2007msec 00:15:14.556 10:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:14.556 10:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:14.556 10:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:15:14.556 10:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:14.556 10:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:15:14.556 10:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:14.556 10:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:15:14.556 10:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:15:14.556 10:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:14.557 10:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:15:14.557 10:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:14.557 10:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:14.557 10:33:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:15:14.557 10:33:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:15:14.557 10:33:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:14.557 10:33:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 
-- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:14.557 10:33:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:15:14.557 10:33:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:14.557 10:33:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:15:14.557 10:33:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:15:14.557 10:33:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:14.557 10:33:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:14.557 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:14.557 fio-3.35 00:15:14.557 Starting 1 thread 00:15:17.091 00:15:17.091 test: (groupid=0, jobs=1): err= 0: pid=75059: Fri Nov 15 10:33:17 2024 00:15:17.091 read: IOPS=7359, BW=115MiB/s (121MB/s)(231MiB/2010msec) 00:15:17.091 slat (usec): min=3, max=127, avg= 4.14, stdev= 2.21 00:15:17.091 clat (usec): min=2351, max=31459, avg=9864.23, stdev=3713.98 00:15:17.091 lat (usec): min=2355, max=31463, avg=9868.36, stdev=3714.17 00:15:17.091 clat percentiles (usec): 00:15:17.091 | 1.00th=[ 4424], 5.00th=[ 5211], 10.00th=[ 5932], 20.00th=[ 6849], 00:15:17.091 | 30.00th=[ 7635], 40.00th=[ 8356], 50.00th=[ 9110], 60.00th=[10028], 00:15:17.091 | 70.00th=[10945], 80.00th=[12125], 90.00th=[15270], 95.00th=[17957], 00:15:17.091 | 99.00th=[20841], 99.50th=[23200], 99.90th=[25560], 99.95th=[25822], 00:15:17.091 | 99.99th=[30540] 00:15:17.091 bw ( KiB/s): min=45952, max=67584, per=49.46%, avg=58240.00, stdev=9209.92, samples=4 00:15:17.091 iops : min= 2872, max= 4224, avg=3640.00, stdev=575.62, samples=4 00:15:17.091 write: IOPS=4031, BW=63.0MiB/s (66.1MB/s)(119MiB/1889msec); 0 zone resets 00:15:17.091 slat (usec): min=35, max=342, avg=40.96, stdev= 9.03 00:15:17.091 clat (usec): min=4996, max=33153, avg=13789.92, stdev=3669.81 00:15:17.091 lat (usec): min=5035, max=33219, avg=13830.89, stdev=3671.71 00:15:17.091 clat percentiles (usec): 00:15:17.091 | 1.00th=[ 8455], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[10814], 00:15:17.091 | 30.00th=[11600], 40.00th=[12387], 50.00th=[13042], 60.00th=[13960], 00:15:17.091 | 70.00th=[15008], 80.00th=[15926], 90.00th=[18482], 95.00th=[21627], 00:15:17.091 | 99.00th=[25822], 99.50th=[29754], 99.90th=[32113], 99.95th=[32375], 00:15:17.091 | 99.99th=[33162] 00:15:17.091 bw ( KiB/s): min=47776, max=68288, per=93.95%, avg=60608.00, stdev=9035.38, samples=4 00:15:17.091 iops : min= 2986, max= 4268, avg=3788.00, stdev=564.71, samples=4 00:15:17.091 lat (msec) : 4=0.31%, 10=42.24%, 20=53.83%, 50=3.61% 00:15:17.091 cpu : usr=81.53%, sys=13.99%, ctx=8, majf=0, minf=6 00:15:17.091 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:15:17.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:17.091 issued rwts: total=14792,7616,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:17.091 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:17.091 00:15:17.091 Run status group 0 (all jobs): 00:15:17.091 READ: bw=115MiB/s (121MB/s), 
115MiB/s-115MiB/s (121MB/s-121MB/s), io=231MiB (242MB), run=2010-2010msec 00:15:17.091 WRITE: bw=63.0MiB/s (66.1MB/s), 63.0MiB/s-63.0MiB/s (66.1MB/s-66.1MB/s), io=119MiB (125MB), run=1889-1889msec 00:15:17.091 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:17.091 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:17.091 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:17.091 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:17.091 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:17.091 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:17.091 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:15:17.092 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:17.092 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:15:17.092 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:17.092 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:17.092 rmmod nvme_tcp 00:15:17.092 rmmod nvme_fabrics 00:15:17.092 rmmod nvme_keyring 00:15:17.092 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:17.092 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:15:17.092 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:15:17.092 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74927 ']' 00:15:17.092 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74927 00:15:17.092 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 74927 ']' 00:15:17.092 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 74927 00:15:17.092 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:15:17.092 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:17.092 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74927 00:15:17.092 killing process with pid 74927 00:15:17.092 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:17.092 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:17.092 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74927' 00:15:17.092 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 74927 00:15:17.092 10:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 74927 00:15:17.351 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:17.351 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:17.351 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:17.351 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:15:17.351 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@791 -- # iptables-restore 00:15:17.351 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:15:17.351 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:17.351 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:17.351 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:17.351 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:17.351 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:17.351 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:17.610 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:17.611 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:17.611 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:17.611 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:17.611 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:17.611 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:17.611 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:17.611 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:17.611 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:17.611 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:17.611 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:17.611 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.611 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:17.611 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.611 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:15:17.611 00:15:17.611 real 0m9.308s 00:15:17.611 user 0m37.024s 00:15:17.611 sys 0m2.537s 00:15:17.611 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:17.611 10:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:17.611 ************************************ 00:15:17.611 END TEST nvmf_fio_host 00:15:17.611 ************************************ 00:15:17.611 10:33:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:17.611 10:33:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:17.611 10:33:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:17.611 10:33:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:17.611 ************************************ 00:15:17.611 START TEST nvmf_failover 00:15:17.611 
************************************ 00:15:17.611 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:17.871 * Looking for test storage... 00:15:17.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:17.871 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:17.871 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:17.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.872 --rc genhtml_branch_coverage=1 00:15:17.872 --rc genhtml_function_coverage=1 00:15:17.872 --rc genhtml_legend=1 00:15:17.872 --rc geninfo_all_blocks=1 00:15:17.872 --rc geninfo_unexecuted_blocks=1 00:15:17.872 00:15:17.872 ' 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:17.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.872 --rc genhtml_branch_coverage=1 00:15:17.872 --rc genhtml_function_coverage=1 00:15:17.872 --rc genhtml_legend=1 00:15:17.872 --rc geninfo_all_blocks=1 00:15:17.872 --rc geninfo_unexecuted_blocks=1 00:15:17.872 00:15:17.872 ' 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:17.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.872 --rc genhtml_branch_coverage=1 00:15:17.872 --rc genhtml_function_coverage=1 00:15:17.872 --rc genhtml_legend=1 00:15:17.872 --rc geninfo_all_blocks=1 00:15:17.872 --rc geninfo_unexecuted_blocks=1 00:15:17.872 00:15:17.872 ' 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:17.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.872 --rc genhtml_branch_coverage=1 00:15:17.872 --rc genhtml_function_coverage=1 00:15:17.872 --rc genhtml_legend=1 00:15:17.872 --rc geninfo_all_blocks=1 00:15:17.872 --rc geninfo_unexecuted_blocks=1 00:15:17.872 00:15:17.872 ' 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.872 
10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:17.872 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:17.873 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:17.873 Cannot find device "nvmf_init_br" 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:17.873 Cannot find device "nvmf_init_br2" 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:15:17.873 Cannot find device "nvmf_tgt_br" 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:17.873 Cannot find device "nvmf_tgt_br2" 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:17.873 Cannot find device "nvmf_init_br" 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:17.873 Cannot find device "nvmf_init_br2" 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:17.873 Cannot find device "nvmf_tgt_br" 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:17.873 Cannot find device "nvmf_tgt_br2" 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:15:17.873 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:18.133 Cannot find device "nvmf_br" 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:18.133 Cannot find device "nvmf_init_if" 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:18.133 Cannot find device "nvmf_init_if2" 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:18.133 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:18.133 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:18.133 
10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:18.133 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:18.393 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:18.393 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:15:18.393 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:18.393 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:18.393 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.147 ms 00:15:18.393 00:15:18.393 --- 10.0.0.3 ping statistics --- 00:15:18.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.393 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:15:18.393 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:18.393 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:18.393 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms 00:15:18.393 00:15:18.393 --- 10.0.0.4 ping statistics --- 00:15:18.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.393 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:15:18.393 10:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:18.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:18.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:15:18.393 00:15:18.393 --- 10.0.0.1 ping statistics --- 00:15:18.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.393 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:18.393 10:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:18.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:18.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:15:18.393 00:15:18.393 --- 10.0.0.2 ping statistics --- 00:15:18.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.393 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:15:18.393 10:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:18.393 10:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:15:18.393 10:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:18.393 10:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:18.394 10:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:18.394 10:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:18.394 10:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:18.394 10:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:18.394 10:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:18.394 10:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:18.394 10:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:18.394 10:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:18.394 10:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:18.394 10:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75320 00:15:18.394 10:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:18.394 10:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75320 00:15:18.394 10:33:19 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 75320 ']' 00:15:18.394 10:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.394 10:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:18.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.394 10:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.394 10:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:18.394 10:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:18.394 [2024-11-15 10:33:19.090588] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:15:18.394 [2024-11-15 10:33:19.090672] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.394 [2024-11-15 10:33:19.242109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:18.652 [2024-11-15 10:33:19.317730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.652 [2024-11-15 10:33:19.317802] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.652 [2024-11-15 10:33:19.317815] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:18.652 [2024-11-15 10:33:19.317823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:18.652 [2024-11-15 10:33:19.317831] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:18.652 [2024-11-15 10:33:19.318984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.652 [2024-11-15 10:33:19.319103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:18.652 [2024-11-15 10:33:19.319104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.652 [2024-11-15 10:33:19.373996] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:19.220 10:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:19.220 10:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:15:19.220 10:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:19.220 10:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:19.220 10:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:19.478 10:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.478 10:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:19.737 [2024-11-15 10:33:20.464418] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.737 10:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:19.996 Malloc0 00:15:19.996 10:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:20.562 10:33:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:20.821 10:33:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:21.080 [2024-11-15 10:33:21.690822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:21.080 10:33:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:21.338 [2024-11-15 10:33:21.943135] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:21.338 10:33:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:21.597 [2024-11-15 10:33:22.199490] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:21.597 10:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75383 00:15:21.597 10:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:21.597 10:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
00:15:21.597 10:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75383 /var/tmp/bdevperf.sock 00:15:21.597 10:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 75383 ']' 00:15:21.597 10:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:21.597 10:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:21.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:21.597 10:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:21.597 10:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:21.597 10:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:22.532 10:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:22.532 10:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:15:22.532 10:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:23.106 NVMe0n1 00:15:23.106 10:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:23.366 00:15:23.366 10:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75412 00:15:23.366 10:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:23.366 10:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:24.302 10:33:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:24.561 10:33:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:27.879 10:33:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:28.138 00:15:28.138 10:33:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:28.397 10:33:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:31.684 10:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:31.684 [2024-11-15 10:33:32.437141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:31.684 10:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:32.621 10:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:33.189 10:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75412 00:15:38.459 { 00:15:38.459 "results": [ 00:15:38.459 { 00:15:38.459 "job": "NVMe0n1", 00:15:38.459 "core_mask": "0x1", 00:15:38.459 "workload": "verify", 00:15:38.459 "status": "finished", 00:15:38.459 "verify_range": { 00:15:38.459 "start": 0, 00:15:38.459 "length": 16384 00:15:38.459 }, 00:15:38.459 "queue_depth": 128, 00:15:38.459 "io_size": 4096, 00:15:38.459 "runtime": 15.010408, 00:15:38.459 "iops": 8356.068669152764, 00:15:38.459 "mibps": 32.640893238877986, 00:15:38.459 "io_failed": 3125, 00:15:38.459 "io_timeout": 0, 00:15:38.459 "avg_latency_us": 14912.02833757283, 00:15:38.459 "min_latency_us": 659.0836363636364, 00:15:38.459 "max_latency_us": 21805.614545454544 00:15:38.459 } 00:15:38.459 ], 00:15:38.459 "core_count": 1 00:15:38.459 } 00:15:38.459 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75383 00:15:38.459 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 75383 ']' 00:15:38.459 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 75383 00:15:38.459 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:15:38.459 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:38.459 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75383 00:15:38.459 killing process with pid 75383 00:15:38.459 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:38.459 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:38.459 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75383' 00:15:38.459 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 75383 00:15:38.459 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 75383 00:15:38.725 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:38.725 [2024-11-15 10:33:22.281595] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:15:38.725 [2024-11-15 10:33:22.281747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75383 ] 00:15:38.725 [2024-11-15 10:33:22.432228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.725 [2024-11-15 10:33:22.515409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.725 [2024-11-15 10:33:22.579532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:38.725 Running I/O for 15 seconds... 
00:15:38.725 6548.00 IOPS, 25.58 MiB/s [2024-11-15T10:33:39.578Z] [2024-11-15 10:33:25.369192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.725 [2024-11-15 10:33:25.369276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.725 [2024-11-15 10:33:25.369310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.725 [2024-11-15 10:33:25.369328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.725 [2024-11-15 10:33:25.369346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.725 [2024-11-15 10:33:25.369361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.725 [2024-11-15 10:33:25.369377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.725 [2024-11-15 10:33:25.369392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.725 [2024-11-15 10:33:25.369408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.725 [2024-11-15 10:33:25.369422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.725 [2024-11-15 10:33:25.369438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.725 [2024-11-15 10:33:25.369452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.725 [2024-11-15 10:33:25.369468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.725 [2024-11-15 10:33:25.369482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.725 [2024-11-15 10:33:25.369498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.725 [2024-11-15 10:33:25.369513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.725 [2024-11-15 10:33:25.369529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.725 [2024-11-15 10:33:25.369543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.725 [2024-11-15 10:33:25.369559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.725 [2024-11-15 10:33:25.369573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:15:38.725 [2024-11-15 10:33:25.369589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.725 [2024-11-15 10:33:25.369639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.725 [2024-11-15 10:33:25.369657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.725 [2024-11-15 10:33:25.369671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.725 [2024-11-15 10:33:25.369688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.725 [2024-11-15 10:33:25.369702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.725 [2024-11-15 10:33:25.369718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.725 [2024-11-15 10:33:25.369732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.725 [2024-11-15 10:33:25.369748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.725 [2024-11-15 10:33:25.369762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.725 [2024-11-15 10:33:25.369777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.725 [2024-11-15 10:33:25.369792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.725 [2024-11-15 10:33:25.369807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.725 [2024-11-15 10:33:25.369822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.725 [2024-11-15 10:33:25.369841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.725 [2024-11-15 10:33:25.369856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.725 [2024-11-15 10:33:25.369872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.725 [2024-11-15 10:33:25.369887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.725 [2024-11-15 10:33:25.369903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.725 [2024-11-15 10:33:25.369917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.725 [2024-11-15 10:33:25.369933] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.725 [2024-11-15 10:33:25.369947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.369963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.369977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.369993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370264] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63064 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 
[2024-11-15 10:33:25.370913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.370973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.370989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.726 [2024-11-15 10:33:25.371003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.726 [2024-11-15 10:33:25.371019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371242] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:15:38.727 [2024-11-15 10:33:25.371937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.371983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.371999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.372013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.372030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.372044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.727 [2024-11-15 10:33:25.372073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.727 [2024-11-15 10:33:25.372089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.728 [2024-11-15 10:33:25.372127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.728 [2024-11-15 10:33:25.372159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.728 [2024-11-15 10:33:25.372189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.728 [2024-11-15 10:33:25.372219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.728 [2024-11-15 10:33:25.372250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372266] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.728 [2024-11-15 10:33:25.372280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.728 [2024-11-15 10:33:25.372310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.728 [2024-11-15 10:33:25.372345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.728 [2024-11-15 10:33:25.372377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.728 [2024-11-15 10:33:25.372407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.728 [2024-11-15 10:33:25.372437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.728 [2024-11-15 10:33:25.372468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.728 [2024-11-15 10:33:25.372498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.728 [2024-11-15 10:33:25.372536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.728 [2024-11-15 10:33:25.372566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372582] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.728 [2024-11-15 10:33:25.372597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.728 [2024-11-15 10:33:25.372627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.728 [2024-11-15 10:33:25.372657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.728 [2024-11-15 10:33:25.372687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.728 [2024-11-15 10:33:25.372718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.728 [2024-11-15 10:33:25.372749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.728 [2024-11-15 10:33:25.372779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.728 [2024-11-15 10:33:25.372809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.728 [2024-11-15 10:33:25.372844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.728 [2024-11-15 10:33:25.372874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62632 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.728 [2024-11-15 10:33:25.372911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.728 [2024-11-15 10:33:25.372943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:62648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.728 [2024-11-15 10:33:25.372973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.372988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:62656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.728 [2024-11-15 10:33:25.373003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.373019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.728 [2024-11-15 10:33:25.373033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.373060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:62672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.728 [2024-11-15 10:33:25.373077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.373094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:62680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.728 [2024-11-15 10:33:25.373108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.728 [2024-11-15 10:33:25.373125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:62688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.728 [2024-11-15 10:33:25.373139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.729 [2024-11-15 10:33:25.373155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:62696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.729 [2024-11-15 10:33:25.373169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.729 [2024-11-15 10:33:25.373185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.729 [2024-11-15 10:33:25.373200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.729 [2024-11-15 10:33:25.373217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:38.729 [2024-11-15 10:33:25.373231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:38.729 [2024-11-15 10:33:25.373247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:38.729 [2024-11-15 10:33:25.373272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:38.729 [2024-11-15 10:33:25.373288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:62728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:38.729 [2024-11-15 10:33:25.373302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:38.729 [2024-11-15 10:33:25.373326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:38.729 [2024-11-15 10:33:25.373342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:38.729 [2024-11-15 10:33:25.373358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:38.729 [2024-11-15 10:33:25.373378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:38.729 [2024-11-15 10:33:25.373394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:38.729 [2024-11-15 10:33:25.373409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:38.729 [2024-11-15 10:33:25.373424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042cb0 is same with the state(6) to be set
00:15:38.729 [2024-11-15 10:33:25.373441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:15:38.729 [2024-11-15 10:33:25.373452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:15:38.729 [2024-11-15 10:33:25.373464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63640 len:8 PRP1 0x0 PRP2 0x0
00:15:38.729 [2024-11-15 10:33:25.373478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:38.729 [2024-11-15 10:33:25.373542] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
00:15:38.729 [2024-11-15 10:33:25.373600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:15:38.729 [2024-11-15 10:33:25.373622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:38.729 [2024-11-15 10:33:25.373638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:15:38.729 [2024-11-15 10:33:25.373652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:38.729 [2024-11-15 10:33:25.373667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:15:38.729 [2024-11-15 10:33:25.373681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:38.729 [2024-11-15 10:33:25.373696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:15:38.729 [2024-11-15 10:33:25.373710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:38.729 [2024-11-15 10:33:25.373725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:15:38.729 [2024-11-15 10:33:25.373771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa6710 (9): Bad file descriptor
00:15:38.729 [2024-11-15 10:33:25.377597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:15:38.729 [2024-11-15 10:33:25.401489] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:15:38.729 6821.50 IOPS, 26.65 MiB/s [2024-11-15T10:33:39.582Z] 7098.67 IOPS, 27.73 MiB/s [2024-11-15T10:33:39.582Z] 7171.75 IOPS, 28.01 MiB/s [2024-11-15T10:33:39.582Z] [2024-11-15 10:33:29.114608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:38.729 [2024-11-15 10:33:29.114693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:38.729 [2024-11-15 10:33:29.114766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:32192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:38.729 [2024-11-15 10:33:29.114790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:38.729 [2024-11-15 10:33:29.114811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:32200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:38.729 [2024-11-15 10:33:29.114830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:38.729 [2024-11-15 10:33:29.114850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:32208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:38.729 [2024-11-15 10:33:29.114868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:38.729 [2024-11-15 10:33:29.114888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:32216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:38.729 [2024-11-15 10:33:29.114906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:38.729 [2024-11-15 10:33:29.114926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:32224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:38.729 [2024-11-15 10:33:29.114945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:15:38.729 [2024-11-15 10:33:29.114965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:31544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.729 [2024-11-15 10:33:29.114982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.729 [2024-11-15 10:33:29.115002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.729 [2024-11-15 10:33:29.115020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.729 [2024-11-15 10:33:29.115039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:31560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.729 [2024-11-15 10:33:29.115074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.729 [2024-11-15 10:33:29.115095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:31568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.729 [2024-11-15 10:33:29.115114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.729 [2024-11-15 10:33:29.115134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.729 [2024-11-15 10:33:29.115151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.729 [2024-11-15 10:33:29.115171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:31584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.729 [2024-11-15 10:33:29.115188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.729 [2024-11-15 10:33:29.115208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.729 [2024-11-15 10:33:29.115225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.729 [2024-11-15 10:33:29.115245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.729 [2024-11-15 10:33:29.115262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.729 [2024-11-15 10:33:29.115294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.730 [2024-11-15 10:33:29.115312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 10:33:29.115333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:31616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.730 [2024-11-15 10:33:29.115351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 
10:33:29.115371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.730 [2024-11-15 10:33:29.115389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 10:33:29.115411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.730 [2024-11-15 10:33:29.115430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 10:33:29.115450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.730 [2024-11-15 10:33:29.115482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 10:33:29.115506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.730 [2024-11-15 10:33:29.115524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 10:33:29.115543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:31656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.730 [2024-11-15 10:33:29.115561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 10:33:29.115581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:31664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.730 [2024-11-15 10:33:29.115598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 10:33:29.115618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.730 [2024-11-15 10:33:29.115636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 10:33:29.115655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:32240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.730 [2024-11-15 10:33:29.115673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 10:33:29.115692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.730 [2024-11-15 10:33:29.115710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 10:33:29.115730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:32256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.730 [2024-11-15 10:33:29.115747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 10:33:29.115767] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:32264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.730 [2024-11-15 10:33:29.115833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 10:33:29.115855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:32272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.730 [2024-11-15 10:33:29.115873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 10:33:29.115893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:32280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.730 [2024-11-15 10:33:29.115911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 10:33:29.115931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:32288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.730 [2024-11-15 10:33:29.115949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 10:33:29.115968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:32296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.730 [2024-11-15 10:33:29.115986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 10:33:29.116005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.730 [2024-11-15 10:33:29.116023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 10:33:29.116042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.730 [2024-11-15 10:33:29.116075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 10:33:29.116097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:31680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.730 [2024-11-15 10:33:29.116115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 10:33:29.116135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.730 [2024-11-15 10:33:29.116153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 10:33:29.116173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:31696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.730 [2024-11-15 10:33:29.116191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 10:33:29.116210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:38 nsid:1 lba:31704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.730 [2024-11-15 10:33:29.116228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 10:33:29.116248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.730 [2024-11-15 10:33:29.116265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 10:33:29.116285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.730 [2024-11-15 10:33:29.116303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 10:33:29.116332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.730 [2024-11-15 10:33:29.116351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.730 [2024-11-15 10:33:29.116370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.731 [2024-11-15 10:33:29.116388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.116408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.731 [2024-11-15 10:33:29.116425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.116445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:31752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.731 [2024-11-15 10:33:29.116463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.116482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:31760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.731 [2024-11-15 10:33:29.116500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.116519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.731 [2024-11-15 10:33:29.116537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.116557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:31776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.731 [2024-11-15 10:33:29.116574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.116594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31784 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.731 [2024-11-15 10:33:29.116612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.116631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:31792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.731 [2024-11-15 10:33:29.116649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.116669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:32312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.731 [2024-11-15 10:33:29.116686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.116707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:32320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.731 [2024-11-15 10:33:29.116735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.116755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:32328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.731 [2024-11-15 10:33:29.116773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.116792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:32336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.731 [2024-11-15 10:33:29.116817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.116838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:32344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.731 [2024-11-15 10:33:29.116856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.116876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:32352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.731 [2024-11-15 10:33:29.116893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.116913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.731 [2024-11-15 10:33:29.116931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.116950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.731 [2024-11-15 10:33:29.116968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.116987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.731 
[2024-11-15 10:33:29.117005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.117024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:31808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.731 [2024-11-15 10:33:29.117042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.117078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:31816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.731 [2024-11-15 10:33:29.117097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.117116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:31824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.731 [2024-11-15 10:33:29.117134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.117153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:31832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.731 [2024-11-15 10:33:29.117171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.117190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:31840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.731 [2024-11-15 10:33:29.117208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.117228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.731 [2024-11-15 10:33:29.117246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.117266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.731 [2024-11-15 10:33:29.117283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.117303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:31864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.731 [2024-11-15 10:33:29.117328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.117350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:31872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.731 [2024-11-15 10:33:29.117368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.117389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:31880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.731 [2024-11-15 10:33:29.117406] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.117426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:31888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.731 [2024-11-15 10:33:29.117444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.117463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.731 [2024-11-15 10:33:29.117481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.117501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:31904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.731 [2024-11-15 10:33:29.117519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.731 [2024-11-15 10:33:29.117538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.731 [2024-11-15 10:33:29.117556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.117576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.732 [2024-11-15 10:33:29.117594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.117613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.732 [2024-11-15 10:33:29.117631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.117650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:31936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.732 [2024-11-15 10:33:29.117668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.117688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.732 [2024-11-15 10:33:29.117705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.117725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.732 [2024-11-15 10:33:29.117742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.117762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:31960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.732 [2024-11-15 10:33:29.117779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.117806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.732 [2024-11-15 10:33:29.117824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.117844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:31976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.732 [2024-11-15 10:33:29.117862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.117881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:31984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.732 [2024-11-15 10:33:29.117899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.117918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.732 [2024-11-15 10:33:29.117936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.117957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.732 [2024-11-15 10:33:29.117976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.117996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:32392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.732 [2024-11-15 10:33:29.118013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.118033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.732 [2024-11-15 10:33:29.118065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.118088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.732 [2024-11-15 10:33:29.118107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.118127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:32416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.732 [2024-11-15 10:33:29.118144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.118164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:32424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.732 [2024-11-15 10:33:29.118181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.118201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:32432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.732 [2024-11-15 10:33:29.118219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.118239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:32440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.732 [2024-11-15 10:33:29.118256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.118275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.732 [2024-11-15 10:33:29.118302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.118323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:32456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.732 [2024-11-15 10:33:29.118341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.118360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.732 [2024-11-15 10:33:29.118378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.118397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:32472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.732 [2024-11-15 10:33:29.118415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.118434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:32480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.732 [2024-11-15 10:33:29.118452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.118471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:32488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.732 [2024-11-15 10:33:29.118489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.118508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:32496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.732 [2024-11-15 10:33:29.118526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.118546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.732 [2024-11-15 10:33:29.118564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.118588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:32000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.732 [2024-11-15 10:33:29.118606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.118626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:32008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.732 [2024-11-15 10:33:29.118644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.118663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:32016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.732 [2024-11-15 10:33:29.118681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.732 [2024-11-15 10:33:29.118700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:32024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.732 [2024-11-15 10:33:29.118718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.118737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.733 [2024-11-15 10:33:29.118755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.118781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:32040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.733 [2024-11-15 10:33:29.118800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.118819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:32048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.733 [2024-11-15 10:33:29.118837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.118856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:32504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.733 [2024-11-15 10:33:29.118874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.118901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:32512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.733 [2024-11-15 10:33:29.118920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.118939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:32520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.733 [2024-11-15 10:33:29.118957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 
10:33:29.118977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:32528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.733 [2024-11-15 10:33:29.118994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.119013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:32536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.733 [2024-11-15 10:33:29.119031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.119064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:32544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.733 [2024-11-15 10:33:29.119091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.119112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:32552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.733 [2024-11-15 10:33:29.119130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.119150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.733 [2024-11-15 10:33:29.119167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.119187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:32056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.733 [2024-11-15 10:33:29.119204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.119224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.733 [2024-11-15 10:33:29.119241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.119261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.733 [2024-11-15 10:33:29.119287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.119308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.733 [2024-11-15 10:33:29.119326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.119346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.733 [2024-11-15 10:33:29.119364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.119383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:32096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.733 [2024-11-15 10:33:29.119401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.119421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:32104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.733 [2024-11-15 10:33:29.119438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.119458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.733 [2024-11-15 10:33:29.119490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.119512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:32120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.733 [2024-11-15 10:33:29.119530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.119555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.733 [2024-11-15 10:33:29.119573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.119592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:32136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.733 [2024-11-15 10:33:29.119610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.119629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:32144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.733 [2024-11-15 10:33:29.119647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.119667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:32152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.733 [2024-11-15 10:33:29.119684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.119704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:32160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.733 [2024-11-15 10:33:29.119732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.119753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:32168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.733 [2024-11-15 10:33:29.119770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.119789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1180930 is same with the state(6) to be set 00:15:38.733 [2024-11-15 10:33:29.119819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:38.733 [2024-11-15 10:33:29.119833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:38.733 [2024-11-15 10:33:29.119847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32176 len:8 PRP1 0x0 PRP2 0x0 00:15:38.733 [2024-11-15 10:33:29.119864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.119934] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:15:38.733 [2024-11-15 10:33:29.120002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.733 [2024-11-15 10:33:29.120027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.733 [2024-11-15 10:33:29.120047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.733 [2024-11-15 10:33:29.120082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:29.120101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.734 [2024-11-15 10:33:29.120118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:29.120136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.734 [2024-11-15 10:33:29.120153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:29.120170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:15:38.734 [2024-11-15 10:33:29.120211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa6710 (9): Bad file descriptor 00:15:38.734 [2024-11-15 10:33:29.125110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:15:38.734 [2024-11-15 10:33:29.159739] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:15:38.734 7155.00 IOPS, 27.95 MiB/s [2024-11-15T10:33:39.587Z] 7466.17 IOPS, 29.16 MiB/s [2024-11-15T10:33:39.587Z] 7697.86 IOPS, 30.07 MiB/s [2024-11-15T10:33:39.587Z] 7876.38 IOPS, 30.77 MiB/s [2024-11-15T10:33:39.587Z] 8003.22 IOPS, 31.26 MiB/s [2024-11-15T10:33:39.587Z] [2024-11-15 10:33:33.737717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:109784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.734 [2024-11-15 10:33:33.737791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.737823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:109792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.734 [2024-11-15 10:33:33.737840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.737858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:109800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.734 [2024-11-15 10:33:33.737873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.737889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:109808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.734 [2024-11-15 10:33:33.737903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.737947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:109816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.734 [2024-11-15 10:33:33.737962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.737978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:109824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.734 [2024-11-15 10:33:33.737992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.738008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:109832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.734 [2024-11-15 10:33:33.738022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.738037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.734 [2024-11-15 10:33:33.738065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.738085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:109144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.734 [2024-11-15 10:33:33.738100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.738115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 
lba:109152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.734 [2024-11-15 10:33:33.738129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.738144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:109160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.734 [2024-11-15 10:33:33.738158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.738174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.734 [2024-11-15 10:33:33.738187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.738203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:109176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.734 [2024-11-15 10:33:33.738217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.738239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:109184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.734 [2024-11-15 10:33:33.738253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.738269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:109192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.734 [2024-11-15 10:33:33.738283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.738298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.734 [2024-11-15 10:33:33.738312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.738328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.734 [2024-11-15 10:33:33.738410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.738435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.734 [2024-11-15 10:33:33.738451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.738467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.734 [2024-11-15 10:33:33.738482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.738498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:109872 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:15:38.734 [2024-11-15 10:33:33.738512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.738528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.734 [2024-11-15 10:33:33.738542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.738562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.734 [2024-11-15 10:33:33.738576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.738593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.734 [2024-11-15 10:33:33.738607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.738622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.734 [2024-11-15 10:33:33.738637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.738653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.734 [2024-11-15 10:33:33.738667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.738682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:109216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.734 [2024-11-15 10:33:33.738697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.734 [2024-11-15 10:33:33.738712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:109224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.734 [2024-11-15 10:33:33.738726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.738742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:109232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.735 [2024-11-15 10:33:33.738756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.738772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.735 [2024-11-15 10:33:33.738786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.738802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:109248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.735 
[2024-11-15 10:33:33.738825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.738842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.735 [2024-11-15 10:33:33.738856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.738872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.735 [2024-11-15 10:33:33.738886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.738901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.735 [2024-11-15 10:33:33.738915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.738932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.735 [2024-11-15 10:33:33.738946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.738962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:109288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.735 [2024-11-15 10:33:33.738977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.738992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.735 [2024-11-15 10:33:33.739006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.739022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.735 [2024-11-15 10:33:33.739036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.739067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:109312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.735 [2024-11-15 10:33:33.739085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.739101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:109320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.735 [2024-11-15 10:33:33.739116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.739131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.735 [2024-11-15 10:33:33.739146] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.739162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.735 [2024-11-15 10:33:33.739176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.739192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.735 [2024-11-15 10:33:33.739206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.739246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.735 [2024-11-15 10:33:33.739261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.739277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.735 [2024-11-15 10:33:33.739292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.739307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.735 [2024-11-15 10:33:33.739322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.739338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.735 [2024-11-15 10:33:33.739352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.739377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.735 [2024-11-15 10:33:33.739392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.739408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.735 [2024-11-15 10:33:33.739422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.739438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:109336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.735 [2024-11-15 10:33:33.739452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.739481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.735 [2024-11-15 10:33:33.739498] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.739514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.735 [2024-11-15 10:33:33.739528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.739544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:109360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.735 [2024-11-15 10:33:33.739558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.739574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:109368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.735 [2024-11-15 10:33:33.739589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.735 [2024-11-15 10:33:33.739604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:109376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.735 [2024-11-15 10:33:33.739619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.736 [2024-11-15 10:33:33.739634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.736 [2024-11-15 10:33:33.739656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.736 [2024-11-15 10:33:33.739673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:109392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.736 [2024-11-15 10:33:33.739687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.736 [2024-11-15 10:33:33.739703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:109400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.736 [2024-11-15 10:33:33.739718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.736 [2024-11-15 10:33:33.739734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:109408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.736 [2024-11-15 10:33:33.739749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.736 [2024-11-15 10:33:33.739765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:109416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.736 [2024-11-15 10:33:33.739779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.736 [2024-11-15 10:33:33.739794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:109424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.736 [2024-11-15 10:33:33.739809] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.736 [2024-11-15 10:33:33.739825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:109432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.736 [2024-11-15 10:33:33.739839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.736 [2024-11-15 10:33:33.739855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.736 [2024-11-15 10:33:33.739869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.736 [2024-11-15 10:33:33.739885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:109448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.736 [2024-11-15 10:33:33.739900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.736 [2024-11-15 10:33:33.739916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.736 [2024-11-15 10:33:33.739930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.736 [2024-11-15 10:33:33.739945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:109464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.736 [2024-11-15 10:33:33.739959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.736 [2024-11-15 10:33:33.739976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.736 [2024-11-15 10:33:33.739990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.736 [2024-11-15 10:33:33.740006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:109480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.736 [2024-11-15 10:33:33.740020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.736 [2024-11-15 10:33:33.740043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:109488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.736 [2024-11-15 10:33:33.740070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.736 [2024-11-15 10:33:33.740093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.736 [2024-11-15 10:33:33.740115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.736 [2024-11-15 10:33:33.740130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.736 [2024-11-15 10:33:33.740145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.736 [2024-11-15 10:33:33.740161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.736 [2024-11-15 10:33:33.740175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.736 [2024-11-15 10:33:33.740191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.736 [2024-11-15 10:33:33.740205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.736 [2024-11-15 10:33:33.740221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.736 [2024-11-15 10:33:33.740235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.736 [2024-11-15 10:33:33.740250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.736 [2024-11-15 10:33:33.740265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.736 [2024-11-15 10:33:33.740280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.736 [2024-11-15 10:33:33.740294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.736 [2024-11-15 10:33:33.740310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.736 [2024-11-15 10:33:33.740324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.736 [2024-11-15 10:33:33.740340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.736 [2024-11-15 10:33:33.740354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.736 [2024-11-15 10:33:33.740370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.736 [2024-11-15 10:33:33.740384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.736 [2024-11-15 10:33:33.740400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.736 [2024-11-15 10:33:33.740415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.740431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.737 [2024-11-15 10:33:33.740456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.740473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.737 [2024-11-15 10:33:33.740487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.740512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.737 [2024-11-15 10:33:33.740527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.740543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:109544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.737 [2024-11-15 10:33:33.740558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.740573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:109552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.737 [2024-11-15 10:33:33.740588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.740604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.737 [2024-11-15 10:33:33.740618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.740635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.737 [2024-11-15 10:33:33.740651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.740677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.737 [2024-11-15 10:33:33.740692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.740708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.737 [2024-11-15 10:33:33.740722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.740738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.737 [2024-11-15 10:33:33.740752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.740768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.737 [2024-11-15 10:33:33.740782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:38.737 [2024-11-15 10:33:33.740798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.737 [2024-11-15 10:33:33.740812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.740828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.737 [2024-11-15 10:33:33.740842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.740864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.737 [2024-11-15 10:33:33.740879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.740895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.737 [2024-11-15 10:33:33.740909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.740925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.737 [2024-11-15 10:33:33.740940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.740956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.737 [2024-11-15 10:33:33.740970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.740986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.737 [2024-11-15 10:33:33.741000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.741016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.737 [2024-11-15 10:33:33.741030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.741046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.737 [2024-11-15 10:33:33.741073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.741090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.737 [2024-11-15 10:33:33.741104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 
10:33:33.741120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.737 [2024-11-15 10:33:33.741134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.741150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.737 [2024-11-15 10:33:33.741164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.741185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.737 [2024-11-15 10:33:33.741200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.741216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.737 [2024-11-15 10:33:33.741235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.741251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.737 [2024-11-15 10:33:33.741273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.741290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.737 [2024-11-15 10:33:33.741304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.737 [2024-11-15 10:33:33.741321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.737 [2024-11-15 10:33:33.741335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.738 [2024-11-15 10:33:33.741351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.738 [2024-11-15 10:33:33.741366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.738 [2024-11-15 10:33:33.741381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.738 [2024-11-15 10:33:33.741395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.738 [2024-11-15 10:33:33.741412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:109696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.738 [2024-11-15 10:33:33.741426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.738 [2024-11-15 10:33:33.741442] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.738 [2024-11-15 10:33:33.741457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.738 [2024-11-15 10:33:33.741472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.738 [2024-11-15 10:33:33.741487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.738 [2024-11-15 10:33:33.741502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.738 [2024-11-15 10:33:33.741516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.738 [2024-11-15 10:33:33.741532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.738 [2024-11-15 10:33:33.741546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.738 [2024-11-15 10:33:33.741562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.738 [2024-11-15 10:33:33.741576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.738 [2024-11-15 10:33:33.741592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.738 [2024-11-15 10:33:33.741613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.738 [2024-11-15 10:33:33.741629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.738 [2024-11-15 10:33:33.741643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.738 [2024-11-15 10:33:33.741668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:109760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.738 [2024-11-15 10:33:33.741689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.738 [2024-11-15 10:33:33.741710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:109768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.738 [2024-11-15 10:33:33.741724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.738 [2024-11-15 10:33:33.741740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1052750 is same with the state(6) to be set 00:15:38.738 [2024-11-15 10:33:33.741757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:38.738 [2024-11-15 10:33:33.741768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:38.738 
[2024-11-15 10:33:33.741779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109776 len:8 PRP1 0x0 PRP2 0x0 00:15:38.738 [2024-11-15 10:33:33.741793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.738 [2024-11-15 10:33:33.741808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:38.738 [2024-11-15 10:33:33.741818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:38.738 [2024-11-15 10:33:33.741829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110104 len:8 PRP1 0x0 PRP2 0x0 00:15:38.738 [2024-11-15 10:33:33.741843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.738 [2024-11-15 10:33:33.741856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:38.738 [2024-11-15 10:33:33.741867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:38.738 [2024-11-15 10:33:33.741877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110112 len:8 PRP1 0x0 PRP2 0x0 00:15:38.738 [2024-11-15 10:33:33.741891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.738 [2024-11-15 10:33:33.741905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:38.738 [2024-11-15 10:33:33.741915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:38.738 [2024-11-15 10:33:33.741926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110120 len:8 PRP1 0x0 PRP2 0x0 00:15:38.738 [2024-11-15 10:33:33.741939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.738 [2024-11-15 10:33:33.741953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:38.738 [2024-11-15 10:33:33.741963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:38.738 [2024-11-15 10:33:33.741973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110128 len:8 PRP1 0x0 PRP2 0x0 00:15:38.738 [2024-11-15 10:33:33.741986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.738 [2024-11-15 10:33:33.742000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:38.738 [2024-11-15 10:33:33.742010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:38.738 [2024-11-15 10:33:33.742021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110136 len:8 PRP1 0x0 PRP2 0x0 00:15:38.738 [2024-11-15 10:33:33.742034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.738 [2024-11-15 10:33:33.742059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:38.738 [2024-11-15 10:33:33.742080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:38.738 [2024-11-15 10:33:33.742091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110144 len:8 PRP1 0x0 PRP2 0x0 00:15:38.738 [2024-11-15 10:33:33.742104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.738 [2024-11-15 10:33:33.742118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:38.738 [2024-11-15 10:33:33.742134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:38.738 [2024-11-15 10:33:33.742145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110152 len:8 PRP1 0x0 PRP2 0x0 00:15:38.738 [2024-11-15 10:33:33.742158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.738 [2024-11-15 10:33:33.742172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:38.738 [2024-11-15 10:33:33.742182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:38.738 [2024-11-15 10:33:33.742193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110160 len:8 PRP1 0x0 PRP2 0x0 00:15:38.738 [2024-11-15 10:33:33.742206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.738 [2024-11-15 10:33:33.742279] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:15:38.738 [2024-11-15 10:33:33.742337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.738 [2024-11-15 10:33:33.742359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.739 [2024-11-15 10:33:33.742374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.739 [2024-11-15 10:33:33.742388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.739 [2024-11-15 10:33:33.742403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.739 [2024-11-15 10:33:33.742416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.739 [2024-11-15 10:33:33.742431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.739 [2024-11-15 10:33:33.742452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.739 [2024-11-15 10:33:33.742467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:15:38.739 [2024-11-15 10:33:33.746353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:15:38.739 [2024-11-15 10:33:33.746397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa6710 (9): Bad file descriptor 00:15:38.739 [2024-11-15 10:33:33.769736] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:15:38.739 8065.20 IOPS, 31.50 MiB/s [2024-11-15T10:33:39.592Z] 8151.00 IOPS, 31.84 MiB/s [2024-11-15T10:33:39.592Z] 8221.92 IOPS, 32.12 MiB/s [2024-11-15T10:33:39.592Z] 8266.69 IOPS, 32.29 MiB/s [2024-11-15T10:33:39.592Z] 8313.86 IOPS, 32.48 MiB/s [2024-11-15T10:33:39.592Z] 8356.00 IOPS, 32.64 MiB/s 00:15:38.739 Latency(us) 00:15:38.739 [2024-11-15T10:33:39.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.739 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:38.739 Verification LBA range: start 0x0 length 0x4000 00:15:38.739 NVMe0n1 : 15.01 8356.07 32.64 208.19 0.00 14912.03 659.08 21805.61 00:15:38.739 [2024-11-15T10:33:39.592Z] =================================================================================================================== 00:15:38.739 [2024-11-15T10:33:39.592Z] Total : 8356.07 32.64 208.19 0.00 14912.03 659.08 21805.61 00:15:38.739 Received shutdown signal, test time was about 15.000000 seconds 00:15:38.739 00:15:38.739 Latency(us) 00:15:38.739 [2024-11-15T10:33:39.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.739 [2024-11-15T10:33:39.592Z] =================================================================================================================== 00:15:38.739 [2024-11-15T10:33:39.592Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:38.739 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:38.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:38.739 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:15:38.739 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:15:38.739 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75586 00:15:38.739 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:38.739 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75586 /var/tmp/bdevperf.sock 00:15:38.739 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 75586 ']' 00:15:38.739 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:38.739 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:38.739 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:38.739 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:38.739 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:38.998 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:38.998 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:15:38.998 10:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:39.259 [2024-11-15 10:33:40.067201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:39.259 10:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:39.518 [2024-11-15 10:33:40.327393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:39.518 10:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:40.084 NVMe0n1 00:15:40.084 10:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:40.343 00:15:40.343 10:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:40.601 00:15:40.601 10:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:40.601 10:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:15:40.860 10:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:41.118 10:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:15:44.402 10:33:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:44.402 10:33:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:15:44.402 10:33:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75661 00:15:44.402 10:33:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:44.402 10:33:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75661 00:15:45.778 { 00:15:45.778 "results": [ 00:15:45.778 { 00:15:45.778 "job": "NVMe0n1", 00:15:45.778 "core_mask": "0x1", 00:15:45.778 "workload": "verify", 00:15:45.778 "status": "finished", 00:15:45.778 "verify_range": { 00:15:45.779 "start": 0, 00:15:45.779 "length": 16384 00:15:45.779 }, 00:15:45.779 "queue_depth": 128, 
00:15:45.779 "io_size": 4096, 00:15:45.779 "runtime": 1.009421, 00:15:45.779 "iops": 7941.186085884879, 00:15:45.779 "mibps": 31.020258147987807, 00:15:45.779 "io_failed": 0, 00:15:45.779 "io_timeout": 0, 00:15:45.779 "avg_latency_us": 16021.025352930501, 00:15:45.779 "min_latency_us": 1295.8254545454545, 00:15:45.779 "max_latency_us": 18230.923636363637 00:15:45.779 } 00:15:45.779 ], 00:15:45.779 "core_count": 1 00:15:45.779 } 00:15:45.779 10:33:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:45.779 [2024-11-15 10:33:39.482302] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:15:45.779 [2024-11-15 10:33:39.482427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75586 ] 00:15:45.779 [2024-11-15 10:33:39.625968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.779 [2024-11-15 10:33:39.681015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.779 [2024-11-15 10:33:39.733699] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:45.779 [2024-11-15 10:33:41.808595] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:45.779 [2024-11-15 10:33:41.808719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.779 [2024-11-15 10:33:41.808745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.779 [2024-11-15 10:33:41.808779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.779 [2024-11-15 10:33:41.808793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.779 [2024-11-15 10:33:41.808807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.779 [2024-11-15 10:33:41.808819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.779 [2024-11-15 10:33:41.808833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.779 [2024-11-15 10:33:41.808846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.779 [2024-11-15 10:33:41.808860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:15:45.779 [2024-11-15 10:33:41.808910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:15:45.779 [2024-11-15 10:33:41.808941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1189710 (9): Bad file descriptor 00:15:45.779 [2024-11-15 10:33:41.813667] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 
00:15:45.779 Running I/O for 1 seconds... 00:15:45.779 7888.00 IOPS, 30.81 MiB/s 00:15:45.779 Latency(us) 00:15:45.779 [2024-11-15T10:33:46.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.779 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:45.779 Verification LBA range: start 0x0 length 0x4000 00:15:45.779 NVMe0n1 : 1.01 7941.19 31.02 0.00 0.00 16021.03 1295.83 18230.92 00:15:45.779 [2024-11-15T10:33:46.632Z] =================================================================================================================== 00:15:45.779 [2024-11-15T10:33:46.632Z] Total : 7941.19 31.02 0.00 0.00 16021.03 1295.83 18230.92 00:15:45.779 10:33:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:45.779 10:33:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:15:45.779 10:33:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:46.347 10:33:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:15:46.347 10:33:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:46.607 10:33:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:46.866 10:33:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:15:50.156 10:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:50.156 10:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:15:50.156 10:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75586 00:15:50.156 10:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 75586 ']' 00:15:50.156 10:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 75586 00:15:50.156 10:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:15:50.156 10:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:50.156 10:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75586 00:15:50.156 killing process with pid 75586 00:15:50.156 10:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:50.156 10:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:50.156 10:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75586' 00:15:50.156 10:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 75586 00:15:50.156 10:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 75586 00:15:50.415 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:15:50.415 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:50.674 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:50.674 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:50.674 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:50.674 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:50.674 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:15:50.674 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:50.674 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:15:50.674 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:50.674 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:50.674 rmmod nvme_tcp 00:15:50.674 rmmod nvme_fabrics 00:15:50.674 rmmod nvme_keyring 00:15:50.674 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:50.674 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:15:50.674 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:15:50.674 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75320 ']' 00:15:50.674 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75320 00:15:50.674 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 75320 ']' 00:15:50.674 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 75320 00:15:50.674 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:15:50.674 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:50.674 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75320 00:15:50.674 killing process with pid 75320 00:15:50.674 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:50.674 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:50.674 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75320' 00:15:50.674 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 75320 00:15:50.674 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 75320 00:15:50.933 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:50.933 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:50.933 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:50.933 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:15:50.933 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:50.933 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:15:50.933 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:15:50.933 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:50.933 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:50.933 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:50.933 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:51.190 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:51.190 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:51.190 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:51.190 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:51.190 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:51.190 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:51.190 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:51.190 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:51.190 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:51.190 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:51.190 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:51.190 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:51.190 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.190 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:51.190 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.190 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:15:51.190 00:15:51.190 real 0m33.590s 00:15:51.190 user 2m9.267s 00:15:51.190 sys 0m6.072s 00:15:51.190 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:51.190 10:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:51.190 ************************************ 00:15:51.190 END TEST nvmf_failover 00:15:51.190 ************************************ 00:15:51.190 10:33:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:51.190 10:33:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:51.190 10:33:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:51.190 10:33:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.449 ************************************ 00:15:51.449 START TEST nvmf_host_discovery 00:15:51.449 ************************************ 00:15:51.449 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:51.449 * Looking for test storage... 
00:15:51.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:51.449 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:51.449 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:15:51.449 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:51.449 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:51.449 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:51.449 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:51.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.450 --rc genhtml_branch_coverage=1 00:15:51.450 --rc genhtml_function_coverage=1 00:15:51.450 --rc genhtml_legend=1 00:15:51.450 --rc geninfo_all_blocks=1 00:15:51.450 --rc geninfo_unexecuted_blocks=1 00:15:51.450 00:15:51.450 ' 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:51.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.450 --rc genhtml_branch_coverage=1 00:15:51.450 --rc genhtml_function_coverage=1 00:15:51.450 --rc genhtml_legend=1 00:15:51.450 --rc geninfo_all_blocks=1 00:15:51.450 --rc geninfo_unexecuted_blocks=1 00:15:51.450 00:15:51.450 ' 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:51.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.450 --rc genhtml_branch_coverage=1 00:15:51.450 --rc genhtml_function_coverage=1 00:15:51.450 --rc genhtml_legend=1 00:15:51.450 --rc geninfo_all_blocks=1 00:15:51.450 --rc geninfo_unexecuted_blocks=1 00:15:51.450 00:15:51.450 ' 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:51.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.450 --rc genhtml_branch_coverage=1 00:15:51.450 --rc genhtml_function_coverage=1 00:15:51.450 --rc genhtml_legend=1 00:15:51.450 --rc geninfo_all_blocks=1 00:15:51.450 --rc geninfo_unexecuted_blocks=1 00:15:51.450 00:15:51.450 ' 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.450 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:51.451 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:51.451 Cannot find device "nvmf_init_br" 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:51.451 Cannot find device "nvmf_init_br2" 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:51.451 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:51.711 Cannot find device "nvmf_tgt_br" 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:51.711 Cannot find device "nvmf_tgt_br2" 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:51.711 Cannot find device "nvmf_init_br" 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:51.711 Cannot find device "nvmf_init_br2" 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:51.711 Cannot find device "nvmf_tgt_br" 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:51.711 Cannot find device "nvmf_tgt_br2" 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:51.711 Cannot find device "nvmf_br" 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:51.711 Cannot find device "nvmf_init_if" 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:51.711 Cannot find device "nvmf_init_if2" 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:51.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:51.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:51.711 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:51.970 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:51.970 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:51.970 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:51.970 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:51.970 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:51.970 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:51.970 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:51.970 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:51.970 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:51.970 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:51.970 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:51.970 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:51.970 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:51.970 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:51.970 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:51.970 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:51.970 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:51.971 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:51.971 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:15:51.971 00:15:51.971 --- 10.0.0.3 ping statistics --- 00:15:51.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.971 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:51.971 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:51.971 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:15:51.971 00:15:51.971 --- 10.0.0.4 ping statistics --- 00:15:51.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.971 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:51.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:51.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:51.971 00:15:51.971 --- 10.0.0.1 ping statistics --- 00:15:51.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.971 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:51.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:51.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:15:51.971 00:15:51.971 --- 10.0.0.2 ping statistics --- 00:15:51.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.971 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=75982 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 75982 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 75982 ']' 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:51.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:51.971 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.971 [2024-11-15 10:33:52.781084] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:15:51.971 [2024-11-15 10:33:52.781179] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.229 [2024-11-15 10:33:52.931849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.229 [2024-11-15 10:33:53.001640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.229 [2024-11-15 10:33:53.001693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.229 [2024-11-15 10:33:53.001707] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.229 [2024-11-15 10:33:53.001722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.229 [2024-11-15 10:33:53.001731] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:52.229 [2024-11-15 10:33:53.002185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.229 [2024-11-15 10:33:53.058796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:53.165 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:53.165 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:15:53.165 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:53.165 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:53.165 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.165 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.165 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:53.165 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.165 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.165 [2024-11-15 10:33:53.775729] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.165 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.165 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:15:53.165 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.165 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.165 [2024-11-15 10:33:53.783865] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:15:53.165 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.165 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:53.166 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.166 10:33:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.166 null0 00:15:53.166 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.166 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:53.166 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.166 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.166 null1 00:15:53.166 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.166 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:53.166 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.166 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.166 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.166 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76014 00:15:53.166 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:53.166 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76014 /tmp/host.sock 00:15:53.166 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 76014 ']' 00:15:53.166 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:15:53.166 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:53.166 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:53.166 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:53.166 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:53.166 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.166 [2024-11-15 10:33:53.865839] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:15:53.166 [2024-11-15 10:33:53.865933] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76014 ] 00:15:53.166 [2024-11-15 10:33:54.016130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.424 [2024-11-15 10:33:54.079856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.424 [2024-11-15 10:33:54.138437] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:53.424 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:53.424 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:15:53.424 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:53.424 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:53.424 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.424 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.424 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.424 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:53.424 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.424 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.424 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.424 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:15:53.424 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:15:53.424 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:53.424 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.424 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:53.424 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:53.424 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.424 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:53.424 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.683 10:33:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.683 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:53.684 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.684 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:15:53.684 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:53.684 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.684 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.684 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.684 10:33:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:15:53.684 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:53.684 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:53.684 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.684 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.684 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:53.684 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:53.684 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.684 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:15:53.684 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:15:53.684 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:53.684 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.684 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.684 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:53.684 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:53.684 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:53.684 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.943 [2024-11-15 10:33:54.572077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:53.943 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.202 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:15:54.202 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:15:54.460 [2024-11-15 10:33:55.228991] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:54.460 [2024-11-15 10:33:55.229045] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:54.460 [2024-11-15 10:33:55.229094] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:54.460 [2024-11-15 10:33:55.235058] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:15:54.460 [2024-11-15 10:33:55.289441] 
bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:15:54.460 [2024-11-15 10:33:55.290506] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1972e50:1 started. 00:15:54.460 [2024-11-15 10:33:55.292461] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:54.460 [2024-11-15 10:33:55.292488] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:54.460 [2024-11-15 10:33:55.297776] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1972e50 was disconnected and freed. delete nvme_qpair. 00:15:55.026 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:55.026 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:55.026 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:15:55.026 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:55.026 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.026 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.026 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:55.026 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:55.026 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:55.026 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.026 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.026 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:55.026 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:55.026 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:55.026 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:55.027 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:55.027 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:15:55.027 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:15:55.027 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:55.027 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:55.027 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.027 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.027 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:55.027 10:33:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:15:55.286 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
00:15:55.287 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.287 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.287 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:55.287 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:55.287 [2024-11-15 10:33:56.031380] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x194b640:1 started. 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:55.287 [2024-11-15 10:33:56.038948] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x194b640 was disconnected and freed. delete nvme_qpair. 
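At this point the harness has already built the test network (the nvmf_tgt_ns_spdk namespace, veth pairs, the nvmf_br bridge and the iptables ACCEPT rules seen earlier in the trace) and is exercising discovery through two SPDK apps: the target on the default RPC socket and the host app on /tmp/host.sock. Condensed into plain rpc.py calls, the flow driven above looks roughly like the sketch below; this assumes SPDK's standard scripts/rpc.py client (the test itself goes through its rpc_cmd wrapper and interleaves waitforcondition polling between steps), with the addresses, NQNs and socket paths taken from the log:

  # target side: transport, discovery listener, subsystem with a null bdev
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
  scripts/rpc.py bdev_null_create null0 1000 512
  scripts/rpc.py bdev_null_create null1 1000 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

  # host side: start discovery, then poll for the attached controller, bdevs and notifications
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers     # expect "nvme0" once discovery attaches
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs                # nvme0n1, then nvme0n1 + nvme0n2 after null1 is added as a second ns
  scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 0 # notification count the test compares against expected_count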
00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.287 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.547 [2024-11-15 10:33:56.141425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:55.547 [2024-11-15 10:33:56.141658] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:15:55.547 [2024-11-15 10:33:56.141699] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:55.548 [2024-11-15 10:33:56.147655] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:55.548 [2024-11-15 10:33:56.213493] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:15:55.548 [2024-11-15 10:33:56.213551] 
bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:55.548 [2024-11-15 10:33:56.213563] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:55.548 [2024-11-15 10:33:56.213569] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # (( max-- )) 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.548 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.548 [2024-11-15 10:33:56.365869] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:15:55.548 [2024-11-15 10:33:56.365906] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:55.548 [2024-11-15 10:33:56.369633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.548 [2024-11-15 10:33:56.369673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.548 [2024-11-15 10:33:56.369688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.548 [2024-11-15 10:33:56.369698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.548 [2024-11-15 10:33:56.369708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.548 [2024-11-15 10:33:56.369718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.548 [2024-11-15 10:33:56.369728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.548 [2024-11-15 10:33:56.369738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.548 [2024-11-15 10:33:56.369748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x194f230 is same with the state(6) to be set 00:15:55.549 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.549 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:55.549 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:55.549 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:55.549 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:55.549 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:55.549 [2024-11-15 10:33:56.371871] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:15:55.549 [2024-11-15 10:33:56.371896] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:55.549 [2024-11-15 10:33:56.371950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194f230 (9): Bad file descriptor 00:15:55.549 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:15:55.549 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:55.549 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:55.549 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.549 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.549 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:55.549 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:55.549 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:55.808 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:15:55.809 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:15:55.809 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:55.809 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:55.809 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.809 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.809 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:55.809 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:55.809 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:56.067 
10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.067 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:57.004 [2024-11-15 10:33:57.790710] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:57.004 [2024-11-15 10:33:57.790773] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:57.004 [2024-11-15 10:33:57.790792] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:57.004 [2024-11-15 10:33:57.796786] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:15:57.004 [2024-11-15 10:33:57.855156] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:15:57.005 [2024-11-15 10:33:57.855968] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x194ab40:1 started. 00:15:57.264 [2024-11-15 10:33:57.857732] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:57.264 [2024-11-15 10:33:57.857805] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:15:57.264 [2024-11-15 10:33:57.859675] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x194ab40 was disconnected and freed. delete nvme_qpair. 
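The repeated waitforcondition / get_bdev_list calls traced above follow a simple retry pattern. The sketch below is reconstructed from the xtrace, not copied from autotest_common.sh: the pause between retries and the failure return are assumptions, since xtrace is disabled around those parts of the helper in this log.

    # Re-evaluate a condition string until it passes or the retry budget runs out.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1   # assumed pause between retries; not visible in this trace
        done
        return 1      # assumed failure path once the budget is exhausted
    }

    # Bdev names reported by the host SPDK instance listening on /tmp/host.sock.
    # rpc_cmd is the autotest wrapper around scripts/rpc.py seen throughout the trace.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Usage as in the discovery test above: wait until exactly nvme0n1 and nvme0n2 remain.
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'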
00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:57.264 request: 00:15:57.264 { 00:15:57.264 "name": "nvme", 00:15:57.264 "trtype": "tcp", 00:15:57.264 "traddr": "10.0.0.3", 00:15:57.264 "adrfam": "ipv4", 00:15:57.264 "trsvcid": "8009", 00:15:57.264 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:57.264 "wait_for_attach": true, 00:15:57.264 "method": "bdev_nvme_start_discovery", 00:15:57.264 "req_id": 1 00:15:57.264 } 00:15:57.264 Got JSON-RPC error response 00:15:57.264 response: 00:15:57.264 { 00:15:57.264 "code": -17, 00:15:57.264 "message": "File exists" 00:15:57.264 } 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.264 10:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:57.264 request: 00:15:57.264 { 00:15:57.264 "name": "nvme_second", 00:15:57.264 "trtype": "tcp", 00:15:57.264 "traddr": "10.0.0.3", 00:15:57.264 "adrfam": "ipv4", 00:15:57.264 "trsvcid": "8009", 00:15:57.264 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:57.264 "wait_for_attach": true, 00:15:57.264 "method": "bdev_nvme_start_discovery", 00:15:57.264 "req_id": 1 00:15:57.264 } 00:15:57.264 Got JSON-RPC error response 00:15:57.264 response: 00:15:57.264 { 00:15:57.264 "code": -17, 00:15:57.264 "message": "File exists" 00:15:57.264 } 00:15:57.264 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:57.264 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:15:57.264 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:57.264 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:57.264 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:57.264 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:15:57.264 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- 
# rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:57.264 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.264 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:57.264 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:57.264 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:57.264 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:57.264 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.264 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:15:57.265 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:15:57.265 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:57.265 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:57.265 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.265 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:57.265 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:57.265 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:57.265 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.265 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:57.265 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:57.265 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:15:57.265 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:57.265 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:57.265 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:57.265 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:57.265 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:57.265 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:57.265 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.265 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.643 [2024-11-15 10:33:59.118165] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:58.643 [2024-11-15 10:33:59.118222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: 
*ERROR*: sock connection error of tqpair=0x193e110 with addr=10.0.0.3, port=8010 00:15:58.643 [2024-11-15 10:33:59.118247] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:58.643 [2024-11-15 10:33:59.118258] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:58.643 [2024-11-15 10:33:59.118267] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:15:59.578 [2024-11-15 10:34:00.118157] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:59.578 [2024-11-15 10:34:00.118249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x193e110 with addr=10.0.0.3, port=8010 00:15:59.578 [2024-11-15 10:34:00.118274] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:59.578 [2024-11-15 10:34:00.118285] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:59.578 [2024-11-15 10:34:00.118295] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:16:00.513 [2024-11-15 10:34:01.117997] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:16:00.513 request: 00:16:00.513 { 00:16:00.513 "name": "nvme_second", 00:16:00.513 "trtype": "tcp", 00:16:00.513 "traddr": "10.0.0.3", 00:16:00.513 "adrfam": "ipv4", 00:16:00.513 "trsvcid": "8010", 00:16:00.513 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:00.513 "wait_for_attach": false, 00:16:00.513 "attach_timeout_ms": 3000, 00:16:00.513 "method": "bdev_nvme_start_discovery", 00:16:00.513 "req_id": 1 00:16:00.513 } 00:16:00.513 Got JSON-RPC error response 00:16:00.513 response: 00:16:00.513 { 00:16:00.513 "code": -110, 00:16:00.513 "message": "Connection timed out" 00:16:00.513 } 00:16:00.513 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:00.513 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:16:00.513 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:00.513 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:00.513 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:00.513 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:00.513 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:00.513 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.513 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.513 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:00.513 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:00.513 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:00.513 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.513 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:00.513 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:00.513 10:34:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76014 00:16:00.513 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:00.513 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:00.513 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:16:00.513 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:00.513 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:16:00.513 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:00.514 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:00.514 rmmod nvme_tcp 00:16:00.514 rmmod nvme_fabrics 00:16:00.514 rmmod nvme_keyring 00:16:00.514 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:00.514 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:16:00.514 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:16:00.514 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 75982 ']' 00:16:00.514 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 75982 00:16:00.514 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 75982 ']' 00:16:00.514 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 75982 00:16:00.514 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:16:00.514 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:00.514 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75982 00:16:00.514 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:00.514 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:00.514 killing process with pid 75982 00:16:00.514 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75982' 00:16:00.514 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 75982 00:16:00.514 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 75982 00:16:00.780 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:00.780 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:00.780 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:00.780 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:16:00.780 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:16:00.780 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:00.780 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:16:00.780 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:00.780 10:34:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:00.780 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:00.780 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:00.780 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:00.780 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:00.780 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:00.780 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:01.039 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:01.039 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:01.039 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:01.039 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:01.039 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:01.039 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:01.040 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:01.040 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:01.040 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.040 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:01.040 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.040 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:16:01.040 00:16:01.040 real 0m9.726s 00:16:01.040 user 0m17.924s 00:16:01.040 sys 0m1.992s 00:16:01.040 ************************************ 00:16:01.040 END TEST nvmf_host_discovery 00:16:01.040 ************************************ 00:16:01.040 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:01.040 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.040 10:34:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:01.040 10:34:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:01.040 10:34:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:01.040 10:34:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.040 ************************************ 00:16:01.040 START TEST nvmf_host_multipath_status 00:16:01.040 ************************************ 00:16:01.040 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:01.298 * Looking for test 
storage... 00:16:01.298 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:01.298 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:01.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.298 --rc genhtml_branch_coverage=1 00:16:01.298 --rc genhtml_function_coverage=1 00:16:01.298 --rc genhtml_legend=1 00:16:01.298 --rc geninfo_all_blocks=1 00:16:01.298 --rc geninfo_unexecuted_blocks=1 00:16:01.298 00:16:01.298 ' 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:01.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.298 --rc genhtml_branch_coverage=1 00:16:01.298 --rc genhtml_function_coverage=1 00:16:01.298 --rc genhtml_legend=1 00:16:01.298 --rc geninfo_all_blocks=1 00:16:01.298 --rc geninfo_unexecuted_blocks=1 00:16:01.298 00:16:01.298 ' 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:01.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.298 --rc genhtml_branch_coverage=1 00:16:01.298 --rc genhtml_function_coverage=1 00:16:01.298 --rc genhtml_legend=1 00:16:01.298 --rc geninfo_all_blocks=1 00:16:01.298 --rc geninfo_unexecuted_blocks=1 00:16:01.298 00:16:01.298 ' 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:01.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.298 --rc genhtml_branch_coverage=1 00:16:01.298 --rc genhtml_function_coverage=1 00:16:01.298 --rc genhtml_legend=1 00:16:01.298 --rc geninfo_all_blocks=1 00:16:01.298 --rc geninfo_unexecuted_blocks=1 00:16:01.298 00:16:01.298 ' 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:01.298 10:34:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:01.298 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:01.298 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:01.299 Cannot find device "nvmf_init_br" 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:01.299 Cannot find device "nvmf_init_br2" 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:01.299 Cannot find device "nvmf_tgt_br" 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:01.299 Cannot find device "nvmf_tgt_br2" 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:01.299 Cannot find device "nvmf_init_br" 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:01.299 Cannot find device "nvmf_init_br2" 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:01.299 Cannot find device "nvmf_tgt_br" 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:01.299 Cannot find device "nvmf_tgt_br2" 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:01.299 Cannot find device "nvmf_br" 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:16:01.299 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:16:01.557 Cannot find device "nvmf_init_if" 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:01.557 Cannot find device "nvmf_init_if2" 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:01.557 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:01.557 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:01.557 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:01.558 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:01.558 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:01.816 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:01.816 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:01.816 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:01.816 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:01.816 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:01.816 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:01.816 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:01.816 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:01.816 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:01.816 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:01.816 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:16:01.816 00:16:01.816 --- 10.0.0.3 ping statistics --- 00:16:01.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.816 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:16:01.816 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:01.817 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:01.817 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms 00:16:01.817 00:16:01.817 --- 10.0.0.4 ping statistics --- 00:16:01.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.817 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:16:01.817 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:01.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:01.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:16:01.817 00:16:01.817 --- 10.0.0.1 ping statistics --- 00:16:01.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.817 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:16:01.817 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:01.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:01.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:16:01.817 00:16:01.817 --- 10.0.0.2 ping statistics --- 00:16:01.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.817 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:01.817 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.817 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:16:01.817 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:01.817 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.817 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:01.817 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:01.817 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.817 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:01.817 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:01.817 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:01.817 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:01.817 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:01.817 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:01.817 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76520 00:16:01.817 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:01.817 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76520 00:16:01.817 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 76520 ']' 00:16:01.817 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.817 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:01.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.817 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
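For reference, the network plumbing that nvmf/common.sh assembles in the trace above reduces to the sketch below. This is a reconstruction from the logged commands, not the script itself: two host-side veth ends (10.0.0.1 and 10.0.0.2) and two target-side ends (10.0.0.3 and 10.0.0.4, moved into the nvmf_tgt_ns_spdk namespace) are joined through the nvmf_br bridge, NVMe/TCP port 4420 is opened in iptables, and the paths are verified with single pings in both directions.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up                        # bring up the host-side ends and the bridge
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br            # enslave the peer ends to the bridge
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4                 # host -> target paths
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1        # target -> host path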
00:16:01.817 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:01.817 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:01.817 [2024-11-15 10:34:02.567943] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:16:01.817 [2024-11-15 10:34:02.568087] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.077 [2024-11-15 10:34:02.725243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:02.077 [2024-11-15 10:34:02.792166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:02.077 [2024-11-15 10:34:02.792231] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:02.077 [2024-11-15 10:34:02.792256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:02.077 [2024-11-15 10:34:02.792266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:02.077 [2024-11-15 10:34:02.792276] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:02.077 [2024-11-15 10:34:02.793669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.077 [2024-11-15 10:34:02.793679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.077 [2024-11-15 10:34:02.854386] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:03.014 10:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:03.014 10:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:16:03.014 10:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:03.014 10:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:03.014 10:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:03.014 10:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.014 10:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76520 00:16:03.014 10:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:03.273 [2024-11-15 10:34:03.886465] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:03.273 10:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:03.533 Malloc0 00:16:03.533 10:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:03.792 10:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:04.051 10:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:04.310 [2024-11-15 10:34:04.997099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:04.310 10:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:04.570 [2024-11-15 10:34:05.293419] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:04.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:04.570 10:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:04.570 10:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76576 00:16:04.570 10:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:04.570 10:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76576 /var/tmp/bdevperf.sock 00:16:04.570 10:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 76576 ']' 00:16:04.570 10:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:04.570 10:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:04.570 10:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
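Condensed, the target-side configuration traced above comes down to the RPC sequence below; rpc.py abbreviates the /home/vagrant/spdk_repo/spdk/scripts/rpc.py path from the trace, talking to the nvmf_tgt started inside the namespace on the default /var/tmp/spdk.sock, and all values mirror what the trace shows.

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB malloc bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r enables ANA reporting
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

The host side then starts bdevperf on its own RPC socket (/var/tmp/bdevperf.sock) and, as the trace below shows, attaches the same subsystem twice with -x multipath, once per listener port, so a single Nvme0n1 bdev ends up with two I/O paths.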
00:16:04.570 10:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:04.570 10:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:05.139 10:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:05.139 10:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:16:05.139 10:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:05.139 10:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:05.712 Nvme0n1 00:16:05.712 10:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:05.971 Nvme0n1 00:16:05.971 10:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:05.971 10:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:08.503 10:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:08.503 10:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:08.503 10:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:08.503 10:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:09.902 10:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:09.902 10:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:09.902 10:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:09.902 10:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:09.902 10:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:09.902 10:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:09.902 10:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:09.902 10:34:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:10.161 10:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:10.161 10:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:10.161 10:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.161 10:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:10.420 10:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:10.420 10:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:10.420 10:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.420 10:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:10.679 10:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:10.679 10:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:10.679 10:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.679 10:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:11.246 10:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.246 10:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:11.246 10:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.246 10:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:11.505 10:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.505 10:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:11.505 10:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:11.764 10:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:12.023 10:34:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:12.968 10:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:12.968 10:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:12.968 10:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:12.968 10:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.226 10:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:13.226 10:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:13.226 10:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:13.226 10:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.794 10:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:13.794 10:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:13.794 10:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.794 10:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:14.106 10:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.106 10:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:14.106 10:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.106 10:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:14.365 10:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.365 10:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:14.365 10:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:14.365 10:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.625 10:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.625 10:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:14.625 10:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.625 10:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:14.885 10:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.885 10:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:14.885 10:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:15.145 10:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:15.714 10:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:16.649 10:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:16.649 10:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:16.649 10:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.649 10:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:16.908 10:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:16.908 10:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:16.908 10:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:16.908 10:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.167 10:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:17.167 10:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:17.167 10:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.167 10:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:17.426 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.426 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:16:17.426 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.426 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:17.993 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.993 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:17.993 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.993 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:17.993 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.993 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:17.993 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:17.993 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.580 10:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.580 10:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:18.580 10:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:18.839 10:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:19.097 10:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:20.034 10:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:20.034 10:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:20.034 10:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.034 10:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:20.292 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.292 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:20.292 10:34:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.292 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:20.551 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:20.551 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:20.551 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.551 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:20.810 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.810 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:20.810 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.810 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:21.069 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.069 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:21.069 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.069 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:21.635 10:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.635 10:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:21.635 10:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.635 10:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:21.894 10:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:21.894 10:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:16:21.894 10:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:22.153 10:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:22.451 10:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:23.389 10:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:23.389 10:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:23.389 10:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.389 10:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:23.648 10:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:23.648 10:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:23.648 10:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:23.648 10:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.214 10:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:24.214 10:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:24.214 10:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.214 10:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:24.214 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:24.214 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:24.214 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:24.214 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.782 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:24.782 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:24.782 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.782 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:16:25.041 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:25.041 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:25.041 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.041 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:25.299 10:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:25.299 10:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:25.299 10:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:25.559 10:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:25.869 10:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:27.246 10:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:27.246 10:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:27.246 10:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.246 10:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:27.246 10:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:27.246 10:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:27.246 10:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:27.246 10:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.504 10:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:27.504 10:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:27.504 10:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:27.504 10:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:16:28.073 10:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:28.073 10:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:28.073 10:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:28.073 10:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:28.332 10:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:28.332 10:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:28.332 10:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:28.332 10:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:28.590 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:28.590 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:28.590 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:28.590 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:28.849 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:28.849 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:29.107 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:29.107 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:29.365 10:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:29.625 10:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:30.560 10:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:30.560 10:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:30.560 10:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
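Up to @114 the bdev uses the default active_passive policy, so at most one path reports current == true at a time. At @116 the policy for Nvme0n1 is switched to active_active, and the checks that follow expect multiple paths to be current at once whenever they are equally preferred (for example @121 with both listeners optimized, or @131 with both non_optimized). A small illustration of the switch and of a per-path summary follows; the jq object construction is illustrative only and is not part of the test script.

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq '.poll_groups[].io_paths[] | {port: .transport.trsvcid, current, connected, accessible}'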
00:16:30.560 10:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:31.129 10:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:31.129 10:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:31.129 10:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.129 10:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:31.389 10:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:31.389 10:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:31.389 10:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.389 10:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:31.647 10:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:31.647 10:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:31.647 10:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:31.647 10:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.906 10:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:31.906 10:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:31.906 10:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.906 10:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:32.165 10:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:32.165 10:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:32.165 10:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.165 10:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:32.424 10:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:32.424 
10:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:32.424 10:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:32.683 10:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:33.251 10:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:34.233 10:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:34.233 10:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:34.233 10:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:34.233 10:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.233 10:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:34.233 10:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:34.233 10:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:34.233 10:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.800 10:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:34.800 10:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:34.800 10:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.800 10:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:35.059 10:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:35.060 10:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:35.060 10:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.060 10:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:35.318 10:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:35.318 10:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:35.318 10:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.318 10:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:35.577 10:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:35.577 10:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:35.578 10:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.578 10:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:35.836 10:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:35.836 10:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:35.836 10:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:36.094 10:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:36.353 10:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:37.289 10:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:37.289 10:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:37.289 10:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.289 10:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:37.547 10:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.547 10:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:37.547 10:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.547 10:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:38.115 10:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:38.115 10:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:16:38.115 10:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.115 10:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:38.374 10:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:38.374 10:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:38.374 10:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:38.374 10:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.633 10:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:38.633 10:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:38.633 10:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.633 10:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:38.892 10:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:38.892 10:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:38.892 10:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.892 10:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:39.150 10:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.150 10:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:39.151 10:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:39.410 10:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:39.669 10:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:40.606 10:34:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:40.606 10:34:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:40.606 10:34:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.606 10:34:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:40.865 10:34:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.865 10:34:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:40.865 10:34:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.865 10:34:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:41.433 10:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:41.433 10:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:41.433 10:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:41.433 10:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:41.434 10:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:41.434 10:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:41.434 10:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:41.434 10:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:42.001 10:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.001 10:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:42.001 10:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.001 10:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:42.259 10:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.259 10:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:42.259 10:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.259 10:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:16:42.516 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:42.517 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76576 00:16:42.517 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 76576 ']' 00:16:42.517 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 76576 00:16:42.517 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:16:42.517 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:42.517 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76576 00:16:42.517 killing process with pid 76576 00:16:42.517 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:16:42.517 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:16:42.517 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76576' 00:16:42.517 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 76576 00:16:42.517 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 76576 00:16:42.517 { 00:16:42.517 "results": [ 00:16:42.517 { 00:16:42.517 "job": "Nvme0n1", 00:16:42.517 "core_mask": "0x4", 00:16:42.517 "workload": "verify", 00:16:42.517 "status": "terminated", 00:16:42.517 "verify_range": { 00:16:42.517 "start": 0, 00:16:42.517 "length": 16384 00:16:42.517 }, 00:16:42.517 "queue_depth": 128, 00:16:42.517 "io_size": 4096, 00:16:42.517 "runtime": 36.340271, 00:16:42.517 "iops": 8547.211989695948, 00:16:42.517 "mibps": 33.3875468347498, 00:16:42.517 "io_failed": 0, 00:16:42.517 "io_timeout": 0, 00:16:42.517 "avg_latency_us": 14943.27445657315, 00:16:42.517 "min_latency_us": 136.84363636363636, 00:16:42.517 "max_latency_us": 4026531.84 00:16:42.517 } 00:16:42.517 ], 00:16:42.517 "core_count": 1 00:16:42.517 } 00:16:42.777 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76576 00:16:42.777 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:42.777 [2024-11-15 10:34:05.360565] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:16:42.777 [2024-11-15 10:34:05.360694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76576 ] 00:16:42.777 [2024-11-15 10:34:05.505336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.777 [2024-11-15 10:34:05.568015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:42.777 [2024-11-15 10:34:05.625812] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:42.777 Running I/O for 90 seconds... 
00:16:42.777 6804.00 IOPS, 26.58 MiB/s [2024-11-15T10:34:43.630Z] 6794.50 IOPS, 26.54 MiB/s [2024-11-15T10:34:43.630Z] 7312.67 IOPS, 28.57 MiB/s [2024-11-15T10:34:43.630Z] 7796.00 IOPS, 30.45 MiB/s [2024-11-15T10:34:43.630Z] 8084.80 IOPS, 31.58 MiB/s [2024-11-15T10:34:43.630Z] 8270.67 IOPS, 32.31 MiB/s [2024-11-15T10:34:43.630Z] 8389.71 IOPS, 32.77 MiB/s [2024-11-15T10:34:43.630Z] 8478.88 IOPS, 33.12 MiB/s [2024-11-15T10:34:43.630Z] 8561.56 IOPS, 33.44 MiB/s [2024-11-15T10:34:43.630Z] 8653.30 IOPS, 33.80 MiB/s [2024-11-15T10:34:43.630Z] 8714.64 IOPS, 34.04 MiB/s [2024-11-15T10:34:43.630Z] 8765.17 IOPS, 34.24 MiB/s [2024-11-15T10:34:43.630Z] 8785.54 IOPS, 34.32 MiB/s [2024-11-15T10:34:43.630Z] 8724.14 IOPS, 34.08 MiB/s [2024-11-15T10:34:43.630Z] 8696.00 IOPS, 33.97 MiB/s [2024-11-15T10:34:43.630Z] [2024-11-15 10:34:22.808831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.778 [2024-11-15 10:34:22.808921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.808982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.778 [2024-11-15 10:34:22.809006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.809030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.778 [2024-11-15 10:34:22.809046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.809086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.778 [2024-11-15 10:34:22.809103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.809125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.778 [2024-11-15 10:34:22.809140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.809163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.778 [2024-11-15 10:34:22.809179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.809201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.778 [2024-11-15 10:34:22.809216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.809238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.778 [2024-11-15 10:34:22.809254] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.809276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.778 [2024-11-15 10:34:22.809291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.809345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.778 [2024-11-15 10:34:22.809363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.809385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.778 [2024-11-15 10:34:22.809401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.809422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.778 [2024-11-15 10:34:22.809438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.809459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.778 [2024-11-15 10:34:22.809474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.809496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.778 [2024-11-15 10:34:22.809513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.809535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.778 [2024-11-15 10:34:22.809550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.809572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.778 [2024-11-15 10:34:22.809588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.809614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.778 [2024-11-15 10:34:22.809631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.809654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:42.778 [2024-11-15 10:34:22.809670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.809692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.778 [2024-11-15 10:34:22.809707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.809729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.778 [2024-11-15 10:34:22.809754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.809776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.778 [2024-11-15 10:34:22.809792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.809826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.778 [2024-11-15 10:34:22.809844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.809867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.778 [2024-11-15 10:34:22.809895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.809927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.778 [2024-11-15 10:34:22.809943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.809966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.778 [2024-11-15 10:34:22.809982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.810004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.778 [2024-11-15 10:34:22.810020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.810041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.778 [2024-11-15 10:34:22.810072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.810097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 
nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.778 [2024-11-15 10:34:22.810119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.810142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.778 [2024-11-15 10:34:22.810157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.810180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.778 [2024-11-15 10:34:22.810196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.810218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.778 [2024-11-15 10:34:22.810235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.810257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.778 [2024-11-15 10:34:22.810273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:42.778 [2024-11-15 10:34:22.810295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.778 [2024-11-15 10:34:22.810311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.810343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.779 [2024-11-15 10:34:22.810360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.810383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.779 [2024-11-15 10:34:22.810399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.810421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.779 [2024-11-15 10:34:22.810437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.810459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.779 [2024-11-15 10:34:22.810475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.810497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.779 [2024-11-15 10:34:22.810513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.810535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.779 [2024-11-15 10:34:22.810551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.810572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.779 [2024-11-15 10:34:22.810588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.810610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.779 [2024-11-15 10:34:22.810635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.810656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.779 [2024-11-15 10:34:22.810673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.810695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.779 [2024-11-15 10:34:22.810711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.810733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.779 [2024-11-15 10:34:22.810756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.810778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.779 [2024-11-15 10:34:22.810794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.810816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.779 [2024-11-15 10:34:22.810840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.810864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.779 [2024-11-15 10:34:22.810880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 
00:16:42.779 [2024-11-15 10:34:22.810902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.779 [2024-11-15 10:34:22.810918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.810940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.779 [2024-11-15 10:34:22.810956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.810978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.779 [2024-11-15 10:34:22.810993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.811016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.779 [2024-11-15 10:34:22.811032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.811066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.779 [2024-11-15 10:34:22.811085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.811108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:62912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.779 [2024-11-15 10:34:22.811125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.811147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.779 [2024-11-15 10:34:22.811168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.811190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.779 [2024-11-15 10:34:22.811206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.811228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.779 [2024-11-15 10:34:22.811244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.811270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.779 [2024-11-15 10:34:22.811287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.811309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.779 [2024-11-15 10:34:22.811334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.811358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.779 [2024-11-15 10:34:22.811375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.811397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.779 [2024-11-15 10:34:22.811413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.811447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.779 [2024-11-15 10:34:22.811466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.811489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.779 [2024-11-15 10:34:22.811506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.811528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.779 [2024-11-15 10:34:22.811544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.811567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.779 [2024-11-15 10:34:22.811582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.811604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.779 [2024-11-15 10:34:22.811620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:42.779 [2024-11-15 10:34:22.811642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.780 [2024-11-15 10:34:22.811658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.811680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.780 [2024-11-15 10:34:22.811696] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.811718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.780 [2024-11-15 10:34:22.811734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.811756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.780 [2024-11-15 10:34:22.811772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.811795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.780 [2024-11-15 10:34:22.811811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.811842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.780 [2024-11-15 10:34:22.811859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.811882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.780 [2024-11-15 10:34:22.811898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.811920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.780 [2024-11-15 10:34:22.811935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.811957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.780 [2024-11-15 10:34:22.811973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.811995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.780 [2024-11-15 10:34:22.812011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.812033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.780 [2024-11-15 10:34:22.812062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.812089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:42.780 [2024-11-15 10:34:22.812106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.812128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.780 [2024-11-15 10:34:22.812145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.812167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.780 [2024-11-15 10:34:22.812183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.812205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.780 [2024-11-15 10:34:22.812229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.812251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.780 [2024-11-15 10:34:22.812267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.812289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.780 [2024-11-15 10:34:22.812305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.812336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.780 [2024-11-15 10:34:22.812353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.812375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.780 [2024-11-15 10:34:22.812391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.812419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.780 [2024-11-15 10:34:22.812435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.812457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.780 [2024-11-15 10:34:22.812472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.812494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 
nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.780 [2024-11-15 10:34:22.812510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.812531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.780 [2024-11-15 10:34:22.812547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.812574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.780 [2024-11-15 10:34:22.812590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.812612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.780 [2024-11-15 10:34:22.812628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.812650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.780 [2024-11-15 10:34:22.812666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.812689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.780 [2024-11-15 10:34:22.812705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.812726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.780 [2024-11-15 10:34:22.812742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.812764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.780 [2024-11-15 10:34:22.812780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.812802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.780 [2024-11-15 10:34:22.812827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.812850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.780 [2024-11-15 10:34:22.812866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.812888] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.780 [2024-11-15 10:34:22.812904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:42.780 [2024-11-15 10:34:22.812926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.781 [2024-11-15 10:34:22.812942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.812968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.781 [2024-11-15 10:34:22.812984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.813006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.781 [2024-11-15 10:34:22.813022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.813065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.781 [2024-11-15 10:34:22.813085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.813108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.781 [2024-11-15 10:34:22.813124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.813147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.781 [2024-11-15 10:34:22.813163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.813184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.781 [2024-11-15 10:34:22.813200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.813222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.781 [2024-11-15 10:34:22.813238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.813260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.781 [2024-11-15 10:34:22.813282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002b p:0 m:0 dnr:0 
00:16:42.781 [2024-11-15 10:34:22.813305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.781 [2024-11-15 10:34:22.813329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.813353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.781 [2024-11-15 10:34:22.813369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.813391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.781 [2024-11-15 10:34:22.813407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.813429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.781 [2024-11-15 10:34:22.813445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.813467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:63136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.781 [2024-11-15 10:34:22.813482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.813504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.781 [2024-11-15 10:34:22.813525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.813546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.781 [2024-11-15 10:34:22.813562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.813584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.781 [2024-11-15 10:34:22.813599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.813628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.781 [2024-11-15 10:34:22.813643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.813665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.781 [2024-11-15 10:34:22.813681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.813708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.781 [2024-11-15 10:34:22.813725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.814447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.781 [2024-11-15 10:34:22.814487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.814522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.781 [2024-11-15 10:34:22.814541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.814588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.781 [2024-11-15 10:34:22.814607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.814636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.781 [2024-11-15 10:34:22.814652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.814682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.781 [2024-11-15 10:34:22.814703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.814733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.781 [2024-11-15 10:34:22.814749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.814782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.781 [2024-11-15 10:34:22.814798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.814827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.781 [2024-11-15 10:34:22.814843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.814888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.781 [2024-11-15 10:34:22.814909] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.814940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.781 [2024-11-15 10:34:22.814957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:42.781 [2024-11-15 10:34:22.814987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.781 [2024-11-15 10:34:22.815003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:42.781 8682.25 IOPS, 33.92 MiB/s [2024-11-15T10:34:43.634Z] 8171.53 IOPS, 31.92 MiB/s [2024-11-15T10:34:43.634Z] 7717.56 IOPS, 30.15 MiB/s [2024-11-15T10:34:43.634Z] 7311.37 IOPS, 28.56 MiB/s [2024-11-15T10:34:43.634Z] 6985.45 IOPS, 27.29 MiB/s [2024-11-15T10:34:43.634Z] 7066.14 IOPS, 27.60 MiB/s [2024-11-15T10:34:43.634Z] 7165.68 IOPS, 27.99 MiB/s [2024-11-15T10:34:43.634Z] 7260.00 IOPS, 28.36 MiB/s [2024-11-15T10:34:43.634Z] 7448.42 IOPS, 29.10 MiB/s [2024-11-15T10:34:43.634Z] 7652.48 IOPS, 29.89 MiB/s [2024-11-15T10:34:43.634Z] 7796.92 IOPS, 30.46 MiB/s [2024-11-15T10:34:43.634Z] 7916.07 IOPS, 30.92 MiB/s [2024-11-15T10:34:43.634Z] 7958.79 IOPS, 31.09 MiB/s [2024-11-15T10:34:43.634Z] 7999.66 IOPS, 31.25 MiB/s [2024-11-15T10:34:43.635Z] 8036.47 IOPS, 31.39 MiB/s [2024-11-15T10:34:43.635Z] 8158.39 IOPS, 31.87 MiB/s [2024-11-15T10:34:43.635Z] 8291.44 IOPS, 32.39 MiB/s [2024-11-15T10:34:43.635Z] 8416.36 IOPS, 32.88 MiB/s [2024-11-15T10:34:43.635Z] [2024-11-15 10:34:40.372646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.782 [2024-11-15 10:34:40.372727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.372788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.782 [2024-11-15 10:34:40.372841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.372868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.782 [2024-11-15 10:34:40.372884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.372907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.782 [2024-11-15 10:34:40.372922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.372944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:39368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.782 [2024-11-15 10:34:40.372959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:100 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.372981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:39400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.782 [2024-11-15 10:34:40.372996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.373018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.782 [2024-11-15 10:34:40.373033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.373071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.782 [2024-11-15 10:34:40.373090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.373113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.782 [2024-11-15 10:34:40.373128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.373150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.782 [2024-11-15 10:34:40.373166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.373187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.782 [2024-11-15 10:34:40.373202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.373224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.782 [2024-11-15 10:34:40.373239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.373260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.782 [2024-11-15 10:34:40.373275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.373297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:39432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.782 [2024-11-15 10:34:40.373324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.373347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:39464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.782 [2024-11-15 10:34:40.373363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.373385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:39360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.782 [2024-11-15 10:34:40.373400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.373422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.782 [2024-11-15 10:34:40.373438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.373463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.782 [2024-11-15 10:34:40.373479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.373501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:39456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.782 [2024-11-15 10:34:40.373517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.373538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.782 [2024-11-15 10:34:40.373554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.373575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.782 [2024-11-15 10:34:40.373590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.373617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:39520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.782 [2024-11-15 10:34:40.373633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.373655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.782 [2024-11-15 10:34:40.373670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.373692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.782 [2024-11-15 10:34:40.373707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.373728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:42.782 [2024-11-15 10:34:40.373744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.373766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.782 [2024-11-15 10:34:40.373781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.373812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:39568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.782 [2024-11-15 10:34:40.373829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.373851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:39600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.782 [2024-11-15 10:34:40.373866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.373887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.782 [2024-11-15 10:34:40.373903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.373924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.782 [2024-11-15 10:34:40.373940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.373962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:39648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.782 [2024-11-15 10:34:40.373979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:42.782 [2024-11-15 10:34:40.374000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.783 [2024-11-15 10:34:40.374016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.374038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.783 [2024-11-15 10:34:40.374068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.374093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.783 [2024-11-15 10:34:40.374109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.374132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 
nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.783 [2024-11-15 10:34:40.374147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.374169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.783 [2024-11-15 10:34:40.374184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.374206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.783 [2024-11-15 10:34:40.374221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.374243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.783 [2024-11-15 10:34:40.374259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.374290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.783 [2024-11-15 10:34:40.374306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.374328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.783 [2024-11-15 10:34:40.374344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.374366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.783 [2024-11-15 10:34:40.374381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.374403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:39576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.783 [2024-11-15 10:34:40.374418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.376914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.783 [2024-11-15 10:34:40.376949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.376980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.783 [2024-11-15 10:34:40.376998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.377021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.783 [2024-11-15 10:34:40.377037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.377077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.783 [2024-11-15 10:34:40.377097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.377121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.783 [2024-11-15 10:34:40.377137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.377159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:40128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.783 [2024-11-15 10:34:40.377174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.377196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:40144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.783 [2024-11-15 10:34:40.377211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.377233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.783 [2024-11-15 10:34:40.377248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.377285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:40176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.783 [2024-11-15 10:34:40.377303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.377325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:40192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.783 [2024-11-15 10:34:40.377340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.377362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.783 [2024-11-15 10:34:40.377377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.377399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.783 [2024-11-15 10:34:40.377414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
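The paired print_command / print_completion notices above and below come from the bdev_nvme layer while host/multipath_status.sh flips the ANA state of the active path: in-flight READ and WRITE commands on qid:1 complete with ASYMMETRIC ACCESS INACCESSIBLE, status (03/02), the path-related status code for an ANA-inaccessible namespace, and the multipath code is expected to retry them on the remaining path, which is why the IOPS counters further down keep climbing. A minimal sketch for summarizing this flood from a captured console log; console.log is a stand-in name, not a file produced by this run:

  # count completions that carried the ANA-inaccessible status
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' console.log
  # break the same completions down by queue id and command id
  grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]* cid:[0-9]*' console.log \
      | sort | uniq -c | sort -rn | head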
00:16:42.783 [2024-11-15 10:34:40.377435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:39800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.783 [2024-11-15 10:34:40.377450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.377472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.783 [2024-11-15 10:34:40.377487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.377509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:39608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.783 [2024-11-15 10:34:40.377524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.377546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:39640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.783 [2024-11-15 10:34:40.377561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.377583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.783 [2024-11-15 10:34:40.377598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:42.783 [2024-11-15 10:34:40.377620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.784 [2024-11-15 10:34:40.377635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:42.784 [2024-11-15 10:34:40.377657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:40216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.784 [2024-11-15 10:34:40.377672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:42.784 [2024-11-15 10:34:40.377694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:40232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.784 [2024-11-15 10:34:40.377710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:42.784 [2024-11-15 10:34:40.377732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:40248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.784 [2024-11-15 10:34:40.377756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:42.784 [2024-11-15 10:34:40.377781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.784 [2024-11-15 10:34:40.377797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:55 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:16:42.784 [2024-11-15 10:34:40.377819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:42.784 [2024-11-15 10:34:40.377834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:16:42.784 [2024-11-15 10:34:40.377856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:40296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:42.784 [2024-11-15 10:34:40.377871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:16:42.784 [2024-11-15 10:34:40.377893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:40312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:42.784 [2024-11-15 10:34:40.377909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:16:42.784 [2024-11-15 10:34:40.377931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:40328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:42.784 [2024-11-15 10:34:40.377946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:16:42.784 8486.35 IOPS, 33.15 MiB/s
[2024-11-15T10:34:43.637Z] 8514.51 IOPS, 33.26 MiB/s
[2024-11-15T10:34:43.637Z] 8541.33 IOPS, 33.36 MiB/s
[2024-11-15T10:34:43.637Z] Received shutdown signal, test time was about 36.341083 seconds
00:16:42.784
00:16:42.784                                            Latency(us)
00:16:42.784 [2024-11-15T10:34:43.637Z] Device Information : runtime(s)     IOPS   MiB/s  Fail/s  TO/s   Average      min        max
00:16:42.784 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:16:42.784 Verification LBA range: start 0x0 length 0x4000
00:16:42.784 Nvme0n1            :      36.34    8547.21   33.39    0.00  0.00  14943.27   136.84 4026531.84
00:16:42.784 [2024-11-15T10:34:43.637Z] ===================================================================================================================
00:16:42.784 [2024-11-15T10:34:43.637Z] Total              :            8547.21   33.39    0.00  0.00  14943.27   136.84 4026531.84
00:16:42.784 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:43.042 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:16:43.042 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:16:43.042 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:16:43.042 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:16:43.042 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:16:43.042 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:16:43.042 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:16:43.042 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:16:43.042 10:34:43
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:43.042 rmmod nvme_tcp 00:16:43.042 rmmod nvme_fabrics 00:16:43.042 rmmod nvme_keyring 00:16:43.042 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:43.042 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:16:43.042 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:16:43.042 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76520 ']' 00:16:43.042 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76520 00:16:43.042 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 76520 ']' 00:16:43.042 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 76520 00:16:43.042 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:16:43.042 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:43.042 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76520 00:16:43.042 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:43.042 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:43.042 killing process with pid 76520 00:16:43.042 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76520' 00:16:43.042 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 76520 00:16:43.042 10:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 76520 00:16:43.300 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:43.300 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:43.300 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:43.300 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:16:43.300 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:16:43.300 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:43.300 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:16:43.300 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:43.300 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:43.300 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:43.300 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:43.300 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:43.300 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:43.300 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:43.558 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:43.558 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:43.558 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:43.558 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:43.558 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:43.558 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:43.558 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:43.558 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:43.558 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:43.558 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.558 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:43.558 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.558 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:16:43.558 00:16:43.558 real 0m42.502s 00:16:43.558 user 2m17.758s 00:16:43.558 sys 0m12.421s 00:16:43.558 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:43.558 ************************************ 00:16:43.558 END TEST nvmf_host_multipath_status 00:16:43.558 10:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:43.558 ************************************ 00:16:43.558 10:34:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:43.558 10:34:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:43.558 10:34:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:43.558 10:34:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.558 ************************************ 00:16:43.558 START TEST nvmf_discovery_remove_ifc 00:16:43.558 ************************************ 00:16:43.558 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:43.817 * Looking for test storage... 
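With the verification job finished, multipath_status.sh tears its environment down: the rpc.py nvmf_delete_subsystem call above removes nqn.2016-06.io.spdk:cnode1, nvmftestfini unloads the nvme-tcp and nvme-fabrics modules, kills the target process (pid 76520), strips the SPDK_NVMF iptables rules, and nvmf_veth_fini deletes the bridge, veth pairs and the nvmf_tgt_ns_spdk namespace; the 0m42.502s real / 2m17.758s user summary is printed and run_test moves on to discovery_remove_ifc.sh. Condensed into plain commands, that teardown amounts to roughly the following (same names as in the log; the iptables line paraphrases the iptr helper and the final netns delete is assumed from the _remove_spdk_ns helper name, neither is a literal line from this run):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 76520                                        # nvmf target started for the previous test
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip link set nvmf_init_br nomaster && ip link set nvmf_init_br down
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns delete nvmf_tgt_ns_spdk                  # assumed equivalent of _remove_spdk_ns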
00:16:43.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:43.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.817 --rc genhtml_branch_coverage=1 00:16:43.817 --rc genhtml_function_coverage=1 00:16:43.817 --rc genhtml_legend=1 00:16:43.817 --rc geninfo_all_blocks=1 00:16:43.817 --rc geninfo_unexecuted_blocks=1 00:16:43.817 00:16:43.817 ' 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:43.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.817 --rc genhtml_branch_coverage=1 00:16:43.817 --rc genhtml_function_coverage=1 00:16:43.817 --rc genhtml_legend=1 00:16:43.817 --rc geninfo_all_blocks=1 00:16:43.817 --rc geninfo_unexecuted_blocks=1 00:16:43.817 00:16:43.817 ' 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:43.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.817 --rc genhtml_branch_coverage=1 00:16:43.817 --rc genhtml_function_coverage=1 00:16:43.817 --rc genhtml_legend=1 00:16:43.817 --rc geninfo_all_blocks=1 00:16:43.817 --rc geninfo_unexecuted_blocks=1 00:16:43.817 00:16:43.817 ' 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:43.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.817 --rc genhtml_branch_coverage=1 00:16:43.817 --rc genhtml_function_coverage=1 00:16:43.817 --rc genhtml_legend=1 00:16:43.817 --rc geninfo_all_blocks=1 00:16:43.817 --rc geninfo_unexecuted_blocks=1 00:16:43.817 00:16:43.817 ' 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:43.817 10:34:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.817 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:43.818 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:43.818 10:34:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:43.818 Cannot find device "nvmf_init_br" 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:43.818 Cannot find device "nvmf_init_br2" 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:43.818 Cannot find device "nvmf_tgt_br" 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:43.818 Cannot find device "nvmf_tgt_br2" 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:43.818 Cannot find device "nvmf_init_br" 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:43.818 Cannot find device "nvmf_init_br2" 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:16:43.818 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:44.075 Cannot find device "nvmf_tgt_br" 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:44.075 Cannot find device "nvmf_tgt_br2" 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:44.075 Cannot find device "nvmf_br" 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:44.075 Cannot find device "nvmf_init_if" 00:16:44.075 10:34:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:44.075 Cannot find device "nvmf_init_if2" 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:44.075 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:44.075 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:44.075 10:34:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:44.075 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:44.332 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:44.332 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:44.333 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:44.333 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:44.333 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:44.333 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:44.333 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:44.333 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:44.333 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:16:44.333 00:16:44.333 --- 10.0.0.3 ping statistics --- 00:16:44.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.333 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:16:44.333 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:44.333 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:44.333 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:16:44.333 00:16:44.333 --- 10.0.0.4 ping statistics --- 00:16:44.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.333 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:44.333 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:44.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:44.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:16:44.333 00:16:44.333 --- 10.0.0.1 ping statistics --- 00:16:44.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.333 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:44.333 10:34:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:44.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:44.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:16:44.333 00:16:44.333 --- 10.0.0.2 ping statistics --- 00:16:44.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.333 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:16:44.333 10:34:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.333 10:34:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:16:44.333 10:34:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:44.333 10:34:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.333 10:34:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:44.333 10:34:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:44.333 10:34:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.333 10:34:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:44.333 10:34:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:44.333 10:34:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:44.333 10:34:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:44.333 10:34:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:44.333 10:34:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:44.333 10:34:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77438 00:16:44.333 10:34:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:44.333 10:34:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77438 00:16:44.333 10:34:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 77438 ']' 00:16:44.333 10:34:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.333 10:34:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:44.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.333 10:34:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
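The four ping statistics blocks above are the connectivity check at the end of nvmf_veth_init: the initiator side keeps nvmf_init_if/nvmf_init_if2 (10.0.0.1 and 10.0.0.2) in the default namespace, the target side gets nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3 and 10.0.0.4) inside the nvmf_tgt_ns_spdk namespace, and the two sides are joined through the nvmf_br bridge with iptables ACCEPT rules for TCP port 4420. Once all four addresses answer, nvme-tcp is loaded and nvmfappstart launches the target inside the namespace (nvmfpid=77438) while waitforlisten polls /var/tmp/spdk.sock. A condensed, single-leg sketch of that topology, using the interface names and addresses the log prints (the second if2/br2 leg and the iptables rules are set up the same way and are omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.3                                  # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target -> initiator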
00:16:44.333 10:34:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:44.333 10:34:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:44.333 [2024-11-15 10:34:45.094929] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:16:44.333 [2024-11-15 10:34:45.095009] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.591 [2024-11-15 10:34:45.241288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.591 [2024-11-15 10:34:45.302067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:44.591 [2024-11-15 10:34:45.302128] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:44.591 [2024-11-15 10:34:45.302140] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:44.591 [2024-11-15 10:34:45.302149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:44.591 [2024-11-15 10:34:45.302156] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:44.591 [2024-11-15 10:34:45.302548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.591 [2024-11-15 10:34:45.354854] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:45.526 10:34:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:45.527 10:34:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:16:45.527 10:34:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:45.527 10:34:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:45.527 10:34:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:45.527 10:34:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.527 10:34:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:45.527 10:34:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.527 10:34:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:45.527 [2024-11-15 10:34:46.141804] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:45.527 [2024-11-15 10:34:46.153914] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:16:45.527 null0 00:16:45.527 [2024-11-15 10:34:46.185910] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:45.527 10:34:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.527 10:34:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77476 00:16:45.527 10:34:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:45.527 10:34:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77476 /tmp/host.sock 00:16:45.527 10:34:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 77476 ']' 00:16:45.527 10:34:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:16:45.527 10:34:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:45.527 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:45.527 10:34:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:45.527 10:34:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:45.527 10:34:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:45.527 [2024-11-15 10:34:46.283928] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:16:45.527 [2024-11-15 10:34:46.284084] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77476 ] 00:16:45.785 [2024-11-15 10:34:46.433802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.785 [2024-11-15 10:34:46.503753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.720 10:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:46.720 10:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:16:46.720 10:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:46.720 10:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:46.720 10:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.720 10:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:46.720 10:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.720 10:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:46.720 10:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.720 10:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:46.720 [2024-11-15 10:34:47.334587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:46.720 10:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.720 10:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:46.720 10:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.720 10:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:47.657 [2024-11-15 10:34:48.397142] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:47.657 [2024-11-15 10:34:48.397190] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:47.657 [2024-11-15 10:34:48.397214] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:47.657 [2024-11-15 10:34:48.403188] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:16:47.657 [2024-11-15 10:34:48.457658] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:16:47.657 [2024-11-15 10:34:48.458869] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1a53fb0:1 started. 00:16:47.657 [2024-11-15 10:34:48.460692] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:47.657 [2024-11-15 10:34:48.460756] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:47.657 [2024-11-15 10:34:48.460785] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:47.657 [2024-11-15 10:34:48.460804] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:47.657 [2024-11-15 10:34:48.460831] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:47.657 10:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.657 10:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:47.657 10:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:47.657 10:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:47.657 10:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.657 [2024-11-15 10:34:48.465832] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1a53fb0 was disconnected and freed. delete nvme_qpair. 
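At this point the discovery controller has attached and the namespace is visible on the host as nvme0n1. The repeated bdev_get_bdevs | jq -r '.[].name' | sort | xargs pipeline in the trace is the test's bdev-list check, wrapped in a sleep-1 retry loop. A minimal sketch of that polling step, assuming SPDK's scripts/rpc.py is called directly in place of the harness's rpc_cmd wrapper and that the real helper also bounds the wait (omitted here):

    # Sketch only: approximates the get_bdev_list/wait_for_bdev helpers seen in the trace.
    # Assumes the host app's RPC socket at /tmp/host.sock.
    get_bdev_list() {
        # Space-separated, sorted list of all bdev names reported by the host app.
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1   # e.g. nvme0n1, or empty to wait for the list to drain
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }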
00:16:47.657 10:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:47.657 10:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:47.657 10:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:47.657 10:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:47.657 10:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.916 10:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:47.916 10:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:16:47.916 10:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:47.916 10:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:47.916 10:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:47.916 10:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:47.916 10:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:47.916 10:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.916 10:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:47.916 10:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:47.916 10:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:47.916 10:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.916 10:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:47.916 10:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:48.852 10:34:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:48.852 10:34:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:48.852 10:34:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.852 10:34:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:48.852 10:34:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:48.852 10:34:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:48.852 10:34:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:48.852 10:34:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.852 10:34:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:48.852 10:34:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:49.788 10:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:49.788 10:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:49.788 10:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:49.788 10:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:49.788 10:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.788 10:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:49.788 10:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:50.063 10:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.063 10:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:50.063 10:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:51.032 10:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:51.032 10:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:51.032 10:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.032 10:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:51.032 10:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:51.032 10:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:51.032 10:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:51.032 10:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.032 10:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:51.032 10:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:51.966 10:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:51.966 10:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:51.966 10:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:51.966 10:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.966 10:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:51.966 10:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:51.966 10:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:51.966 10:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.966 10:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:51.966 10:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:53.342 10:34:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:53.342 10:34:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:53.342 10:34:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:53.342 10:34:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:53.342 10:34:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.343 10:34:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:53.343 10:34:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:53.343 10:34:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.343 10:34:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:53.343 10:34:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:53.343 [2024-11-15 10:34:53.888591] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:53.343 [2024-11-15 10:34:53.888682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.343 [2024-11-15 10:34:53.888700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.343 [2024-11-15 10:34:53.888713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.343 [2024-11-15 10:34:53.888723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.343 [2024-11-15 10:34:53.888733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.343 [2024-11-15 10:34:53.888744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.343 [2024-11-15 10:34:53.888754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.343 [2024-11-15 10:34:53.888764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.343 [2024-11-15 10:34:53.888774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.343 [2024-11-15 10:34:53.888784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.343 [2024-11-15 10:34:53.888793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a30240 is same with the state(6) to be set 00:16:53.343 [2024-11-15 10:34:53.898580] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a30240 (9): Bad file descriptor 00:16:53.343 [2024-11-15 10:34:53.908609] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:16:53.343 [2024-11-15 10:34:53.908646] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:16:53.343 [2024-11-15 10:34:53.908657] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:16:53.343 [2024-11-15 10:34:53.908672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:53.343 [2024-11-15 10:34:53.908713] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:16:54.279 10:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:54.279 10:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:54.279 10:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.279 10:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:54.279 10:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:54.279 10:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:54.279 10:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:54.279 [2024-11-15 10:34:54.927177] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:54.279 [2024-11-15 10:34:54.927328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a30240 with addr=10.0.0.3, port=4420 00:16:54.279 [2024-11-15 10:34:54.927361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a30240 is same with the state(6) to be set 00:16:54.279 [2024-11-15 10:34:54.927446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a30240 (9): Bad file descriptor 00:16:54.279 [2024-11-15 10:34:54.928218] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:16:54.279 [2024-11-15 10:34:54.928291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:16:54.279 [2024-11-15 10:34:54.928311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:16:54.279 [2024-11-15 10:34:54.928329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:16:54.279 [2024-11-15 10:34:54.928348] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:16:54.279 [2024-11-15 10:34:54.928359] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:16:54.279 [2024-11-15 10:34:54.928369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
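The connection timeout and reset sequence above is the expected reaction to the fault injected earlier in the trace: the test deletes the listener address and downs the interface inside the target's network namespace, so the host's TCP qpair to 10.0.0.3:4420 times out (errno 110) and bdev_nvme begins its disconnect/reconnect cycle. A minimal sketch of that injection step, using the namespace and interface names this harness sets up (nvmf_tgt_ns_spdk, nvmf_tgt_if):

    # Sketch only: the link-loss injection visible earlier in the trace.
    # Making 10.0.0.3:4420 unreachable is what drives the errno 110 / reset messages above.
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down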
00:16:54.279 [2024-11-15 10:34:54.928386] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:16:54.279 [2024-11-15 10:34:54.928396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:54.279 10:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.279 10:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:54.279 10:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:55.214 [2024-11-15 10:34:55.928455] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:16:55.214 [2024-11-15 10:34:55.928518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:16:55.214 [2024-11-15 10:34:55.928561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:16:55.214 [2024-11-15 10:34:55.928573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:16:55.214 [2024-11-15 10:34:55.928585] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:16:55.214 [2024-11-15 10:34:55.928595] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:16:55.214 [2024-11-15 10:34:55.928601] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:16:55.214 [2024-11-15 10:34:55.928607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:16:55.214 [2024-11-15 10:34:55.928651] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:16:55.214 [2024-11-15 10:34:55.928710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.214 [2024-11-15 10:34:55.928732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.214 [2024-11-15 10:34:55.928747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.214 [2024-11-15 10:34:55.928757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.214 [2024-11-15 10:34:55.928767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.214 [2024-11-15 10:34:55.928777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.214 [2024-11-15 10:34:55.928787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.214 [2024-11-15 10:34:55.928796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.214 [2024-11-15 10:34:55.928806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.214 [2024-11-15 10:34:55.928815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.214 [2024-11-15 10:34:55.928825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
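The controller gives up here rather than retrying indefinitely because of the timeouts passed to bdev_nvme_start_discovery when the host was set up. Read from the option names and the observed timing: reconnect attempts are spaced one second apart, pending I/O is failed fast after one second, and after two seconds without a successful reconnect the controller is declared lost, its bdevs are deleted, and the discovery entry is removed, which is what lets the empty-list wait complete. A sketch of the same host-side start command with the values taken from this run, again assuming scripts/rpc.py in place of the harness's rpc_cmd wrapper:

    # Sketch only: the discovery start as issued against the host app in this run.
    # --reconnect-delay-sec 1      wait 1s between reconnect attempts
    # --fast-io-fail-timeout-sec 1 fail pending I/O after 1s without a connection
    # --ctrlr-loss-timeout-sec 2   give up and delete the controller (and its bdevs) after 2s
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --ctrlr-loss-timeout-sec 2 \
        --wait-for-attach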
00:16:55.214 [2024-11-15 10:34:55.928847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19bba20 (9): Bad file descriptor 00:16:55.214 [2024-11-15 10:34:55.929531] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:55.214 [2024-11-15 10:34:55.929555] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:16:55.214 10:34:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:55.214 10:34:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:55.214 10:34:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:55.214 10:34:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.214 10:34:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:55.214 10:34:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:55.214 10:34:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:55.214 10:34:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.214 10:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:55.214 10:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:55.214 10:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:55.214 10:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:55.214 10:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:55.214 10:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:55.214 10:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.214 10:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:55.214 10:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:55.214 10:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:55.214 10:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:55.214 10:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.473 10:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:55.473 10:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:56.406 10:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:56.406 10:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:56.406 10:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:56.406 10:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.406 10:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:56.406 10:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:56.406 10:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:56.406 10:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.406 10:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:56.406 10:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:57.341 [2024-11-15 10:34:57.941905] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:57.341 [2024-11-15 10:34:57.942188] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:57.341 [2024-11-15 10:34:57.942227] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:57.341 [2024-11-15 10:34:57.947952] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:16:57.341 [2024-11-15 10:34:58.002414] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:16:57.341 [2024-11-15 10:34:58.003346] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1a0c9f0:1 started. 00:16:57.341 [2024-11-15 10:34:58.004703] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:57.341 [2024-11-15 10:34:58.004758] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:57.341 [2024-11-15 10:34:58.004783] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:57.341 [2024-11-15 10:34:58.004802] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:16:57.341 [2024-11-15 10:34:58.004813] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:57.341 [2024-11-15 10:34:58.010552] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1a0c9f0 was disconnected and freed. delete nvme_qpair. 
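Once the address is restored and the interface brought back up, the still-running discovery service reconnects to 10.0.0.3:8009 on its own and re-attaches the subsystem; because the previous controller was deleted, the re-created one surfaces its namespace as nvme1n1 rather than nvme0n1, which is the name the test now waits for. A minimal sketch of that restore-and-wait step, reusing the wait_for_bdev sketch above and the same harness interface names:

    # Sketch only: undo the fault injection and wait for rediscovery to complete.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    # The discovery poller reconnects and re-attaches the NVM subsystem;
    # the new controller exposes its namespace under the next bdev name.
    wait_for_bdev nvme1n1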
00:16:57.341 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:57.341 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:57.341 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:57.341 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.341 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:57.341 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:57.341 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:57.341 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.598 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:57.598 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:57.599 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77476 00:16:57.599 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 77476 ']' 00:16:57.599 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 77476 00:16:57.599 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:16:57.599 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:57.599 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77476 00:16:57.599 killing process with pid 77476 00:16:57.599 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:57.599 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:57.599 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77476' 00:16:57.599 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 77476 00:16:57.599 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 77476 00:16:57.599 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:57.599 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:57.599 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:16:57.858 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:57.858 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:16:57.858 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:57.858 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:57.858 rmmod nvme_tcp 00:16:57.858 rmmod nvme_fabrics 00:16:57.858 rmmod nvme_keyring 00:16:57.858 10:34:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:57.858 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:16:57.858 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:16:57.858 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77438 ']' 00:16:57.858 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77438 00:16:57.858 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 77438 ']' 00:16:57.858 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 77438 00:16:57.858 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:16:57.858 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:57.858 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77438 00:16:57.858 killing process with pid 77438 00:16:57.858 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:57.858 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:57.858 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77438' 00:16:57.858 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 77438 00:16:57.858 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 77438 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:58.117 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.376 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:16:58.376 00:16:58.376 real 0m14.610s 00:16:58.376 user 0m24.865s 00:16:58.376 sys 0m2.585s 00:16:58.376 ************************************ 00:16:58.376 END TEST nvmf_discovery_remove_ifc 00:16:58.376 ************************************ 00:16:58.376 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:58.376 10:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:58.376 10:34:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:58.376 10:34:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:58.376 10:34:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:58.376 10:34:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.376 ************************************ 00:16:58.376 START TEST nvmf_identify_kernel_target 00:16:58.376 ************************************ 00:16:58.376 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:58.376 * Looking for test storage... 
00:16:58.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:58.376 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:58.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.377 --rc genhtml_branch_coverage=1 00:16:58.377 --rc genhtml_function_coverage=1 00:16:58.377 --rc genhtml_legend=1 00:16:58.377 --rc geninfo_all_blocks=1 00:16:58.377 --rc geninfo_unexecuted_blocks=1 00:16:58.377 00:16:58.377 ' 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:58.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.377 --rc genhtml_branch_coverage=1 00:16:58.377 --rc genhtml_function_coverage=1 00:16:58.377 --rc genhtml_legend=1 00:16:58.377 --rc geninfo_all_blocks=1 00:16:58.377 --rc geninfo_unexecuted_blocks=1 00:16:58.377 00:16:58.377 ' 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:58.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.377 --rc genhtml_branch_coverage=1 00:16:58.377 --rc genhtml_function_coverage=1 00:16:58.377 --rc genhtml_legend=1 00:16:58.377 --rc geninfo_all_blocks=1 00:16:58.377 --rc geninfo_unexecuted_blocks=1 00:16:58.377 00:16:58.377 ' 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:58.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.377 --rc genhtml_branch_coverage=1 00:16:58.377 --rc genhtml_function_coverage=1 00:16:58.377 --rc genhtml_legend=1 00:16:58.377 --rc geninfo_all_blocks=1 00:16:58.377 --rc geninfo_unexecuted_blocks=1 00:16:58.377 00:16:58.377 ' 00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:16:58.377 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:58.637 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:16:58.637 10:34:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:58.637 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:58.638 10:34:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:58.638 Cannot find device "nvmf_init_br" 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:58.638 Cannot find device "nvmf_init_br2" 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:58.638 Cannot find device "nvmf_tgt_br" 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:58.638 Cannot find device "nvmf_tgt_br2" 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:58.638 Cannot find device "nvmf_init_br" 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:58.638 Cannot find device "nvmf_init_br2" 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:58.638 Cannot find device "nvmf_tgt_br" 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:58.638 Cannot find device "nvmf_tgt_br2" 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:58.638 Cannot find device "nvmf_br" 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:58.638 Cannot find device "nvmf_init_if" 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:58.638 Cannot find device "nvmf_init_if2" 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:58.638 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:58.638 10:34:59 
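Annotation: the "Cannot find device ..." messages above are expected on a fresh host. Before building its topology the harness tears down any interfaces left over from a previous run, and each failing `ip link` call in the trace is immediately followed by `true`, so the cleanup keeps going. A condensed sketch of that tolerant teardown (not the literal common.sh code):

  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster 2>/dev/null || true   # detach from the bridge, if attached
      ip link set "$dev" down     2>/dev/null || true
  done
  ip link delete nvmf_br type bridge 2>/dev/null || true
  ip link delete nvmf_init_if        2>/dev/null || true
  ip link delete nvmf_init_if2       2>/dev/null || true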
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:58.638 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:58.638 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:58.897 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:58.897 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:58.897 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:58.898 10:34:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:58.898 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:58.898 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:16:58.898 00:16:58.898 --- 10.0.0.3 ping statistics --- 00:16:58.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.898 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:58.898 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:58.898 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:16:58.898 00:16:58.898 --- 10.0.0.4 ping statistics --- 00:16:58.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.898 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:58.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:58.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:58.898 00:16:58.898 --- 10.0.0.1 ping statistics --- 00:16:58.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.898 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:58.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
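Annotation: taken together, the nvmf_veth_init trace builds a small two-namespace topology. The initiator-side veth ends nvmf_init_if/nvmf_init_if2 (10.0.0.1-2/24) stay in the default namespace, the target-side ends nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3-4/24) are moved into the nvmf_tgt_ns_spdk namespace, the peer ends are enslaved to the nvmf_br bridge, the port-4420 ACCEPT rules are tagged with an SPDK_NVMF comment so cleanup can filter them out later, and the pings confirm reachability in both directions. A condensed sketch showing one interface of each pair (names and addresses copied from the trace, comment text abbreviated):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br      # bridge the two namespaces together
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF: allow NVMe/TCP'
  ping -c 1 10.0.0.3                           # initiator -> target across the bridge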
00:16:58.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:16:58.898 00:16:58.898 --- 10.0.0.2 ping statistics --- 00:16:58.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.898 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:16:58.898 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:58.899 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:58.899 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.899 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.899 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:58.899 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.899 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:58.899 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:58.899 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:58.899 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:16:58.899 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:16:58.899 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:16:58.899 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:16:58.899 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:58.899 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:58.899 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:58.899 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:16:58.899 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:16:58.899 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:16:58.899 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:58.899 10:34:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:59.158 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:59.158 Waiting for block devices as requested 00:16:59.416 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:59.416 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:59.416 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:59.416 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:59.416 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:16:59.416 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:16:59.416 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:59.416 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:59.416 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:16:59.416 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:16:59.416 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:59.674 No valid GPT data, bailing 00:16:59.674 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:59.674 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:59.674 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:59.674 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:16:59.674 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:59.674 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:59.674 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:16:59.674 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:16:59.674 10:35:00 
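Annotation: the loop below scans /sys/block/nvme* and settles on the first namespace that is not zoned and carries no partition table (both spdk-gpt.py and blkid report nothing, hence the repeated "No valid GPT data, bailing"); that device, /dev/nvme1n1 here, becomes the backing store for a kernel NVMe-oF target built through configfs. The mkdir/echo/ln calls traced further down amount to the following sketch; the xtrace does not show redirection targets, so the attribute file names below are the standard nvmet configfs ones rather than text copied from the log:

  nqn=nqn.2016-06.io.spdk:testnqn
  subsys=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  modprobe nvmet
  modprobe nvmet_tcp
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$port"
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"    # expose the subsystem on the TCP port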
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:59.674 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:59.674 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:16:59.674 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:16:59.674 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:59.674 No valid GPT data, bailing 00:16:59.674 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:59.674 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:59.674 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:59.674 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:16:59.674 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:59.674 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:59.674 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:16:59.674 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:16:59.675 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:59.675 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:59.675 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:16:59.675 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:16:59.675 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:59.675 No valid GPT data, bailing 00:16:59.675 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:59.675 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:59.675 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:59.675 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:16:59.675 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:59.675 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:59.675 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:16:59.675 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:16:59.675 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:59.675 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:59.675 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:16:59.675 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:16:59.675 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:59.675 No valid GPT data, bailing 00:16:59.933 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:59.933 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:59.933 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:59.933 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:16:59.933 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:16:59.933 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:59.933 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:59.933 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:59.933 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:16:59.933 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:16:59.933 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:16:59.933 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:16:59.933 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:16:59.933 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:16:59.933 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:16:59.933 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:16:59.933 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:59.933 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid=b4733420-cf17-49bc-adb6-f89fe6fa7a33 -a 10.0.0.1 -t tcp -s 4420 00:16:59.933 00:16:59.933 Discovery Log Number of Records 2, Generation counter 2 00:16:59.933 =====Discovery Log Entry 0====== 00:16:59.933 trtype: tcp 00:16:59.933 adrfam: ipv4 00:16:59.933 subtype: current discovery subsystem 00:16:59.933 treq: not specified, sq flow control disable supported 00:16:59.933 portid: 1 00:16:59.933 trsvcid: 4420 00:16:59.933 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:59.933 traddr: 10.0.0.1 00:16:59.933 eflags: none 00:16:59.933 sectype: none 00:16:59.933 =====Discovery Log Entry 1====== 00:16:59.933 trtype: tcp 00:16:59.933 adrfam: ipv4 00:16:59.933 subtype: nvme subsystem 00:16:59.933 treq: not 
specified, sq flow control disable supported 00:16:59.933 portid: 1 00:16:59.933 trsvcid: 4420 00:16:59.934 subnqn: nqn.2016-06.io.spdk:testnqn 00:16:59.934 traddr: 10.0.0.1 00:16:59.934 eflags: none 00:16:59.934 sectype: none 00:16:59.934 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:16:59.934 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:17:00.193 ===================================================== 00:17:00.193 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:00.193 ===================================================== 00:17:00.193 Controller Capabilities/Features 00:17:00.193 ================================ 00:17:00.193 Vendor ID: 0000 00:17:00.193 Subsystem Vendor ID: 0000 00:17:00.193 Serial Number: e74cf637249aa7344f45 00:17:00.193 Model Number: Linux 00:17:00.193 Firmware Version: 6.8.9-20 00:17:00.193 Recommended Arb Burst: 0 00:17:00.193 IEEE OUI Identifier: 00 00 00 00:17:00.193 Multi-path I/O 00:17:00.193 May have multiple subsystem ports: No 00:17:00.193 May have multiple controllers: No 00:17:00.193 Associated with SR-IOV VF: No 00:17:00.193 Max Data Transfer Size: Unlimited 00:17:00.193 Max Number of Namespaces: 0 00:17:00.193 Max Number of I/O Queues: 1024 00:17:00.193 NVMe Specification Version (VS): 1.3 00:17:00.193 NVMe Specification Version (Identify): 1.3 00:17:00.193 Maximum Queue Entries: 1024 00:17:00.193 Contiguous Queues Required: No 00:17:00.193 Arbitration Mechanisms Supported 00:17:00.193 Weighted Round Robin: Not Supported 00:17:00.193 Vendor Specific: Not Supported 00:17:00.193 Reset Timeout: 7500 ms 00:17:00.193 Doorbell Stride: 4 bytes 00:17:00.193 NVM Subsystem Reset: Not Supported 00:17:00.193 Command Sets Supported 00:17:00.193 NVM Command Set: Supported 00:17:00.193 Boot Partition: Not Supported 00:17:00.193 Memory Page Size Minimum: 4096 bytes 00:17:00.193 Memory Page Size Maximum: 4096 bytes 00:17:00.193 Persistent Memory Region: Not Supported 00:17:00.193 Optional Asynchronous Events Supported 00:17:00.193 Namespace Attribute Notices: Not Supported 00:17:00.193 Firmware Activation Notices: Not Supported 00:17:00.193 ANA Change Notices: Not Supported 00:17:00.193 PLE Aggregate Log Change Notices: Not Supported 00:17:00.193 LBA Status Info Alert Notices: Not Supported 00:17:00.193 EGE Aggregate Log Change Notices: Not Supported 00:17:00.193 Normal NVM Subsystem Shutdown event: Not Supported 00:17:00.193 Zone Descriptor Change Notices: Not Supported 00:17:00.193 Discovery Log Change Notices: Supported 00:17:00.193 Controller Attributes 00:17:00.193 128-bit Host Identifier: Not Supported 00:17:00.193 Non-Operational Permissive Mode: Not Supported 00:17:00.193 NVM Sets: Not Supported 00:17:00.193 Read Recovery Levels: Not Supported 00:17:00.193 Endurance Groups: Not Supported 00:17:00.193 Predictable Latency Mode: Not Supported 00:17:00.193 Traffic Based Keep ALive: Not Supported 00:17:00.193 Namespace Granularity: Not Supported 00:17:00.193 SQ Associations: Not Supported 00:17:00.193 UUID List: Not Supported 00:17:00.193 Multi-Domain Subsystem: Not Supported 00:17:00.193 Fixed Capacity Management: Not Supported 00:17:00.193 Variable Capacity Management: Not Supported 00:17:00.193 Delete Endurance Group: Not Supported 00:17:00.193 Delete NVM Set: Not Supported 00:17:00.193 Extended LBA Formats Supported: Not Supported 00:17:00.193 Flexible Data 
Placement Supported: Not Supported 00:17:00.193 00:17:00.193 Controller Memory Buffer Support 00:17:00.193 ================================ 00:17:00.193 Supported: No 00:17:00.193 00:17:00.193 Persistent Memory Region Support 00:17:00.193 ================================ 00:17:00.193 Supported: No 00:17:00.193 00:17:00.193 Admin Command Set Attributes 00:17:00.193 ============================ 00:17:00.193 Security Send/Receive: Not Supported 00:17:00.193 Format NVM: Not Supported 00:17:00.193 Firmware Activate/Download: Not Supported 00:17:00.193 Namespace Management: Not Supported 00:17:00.193 Device Self-Test: Not Supported 00:17:00.193 Directives: Not Supported 00:17:00.193 NVMe-MI: Not Supported 00:17:00.193 Virtualization Management: Not Supported 00:17:00.193 Doorbell Buffer Config: Not Supported 00:17:00.193 Get LBA Status Capability: Not Supported 00:17:00.193 Command & Feature Lockdown Capability: Not Supported 00:17:00.193 Abort Command Limit: 1 00:17:00.193 Async Event Request Limit: 1 00:17:00.193 Number of Firmware Slots: N/A 00:17:00.193 Firmware Slot 1 Read-Only: N/A 00:17:00.193 Firmware Activation Without Reset: N/A 00:17:00.193 Multiple Update Detection Support: N/A 00:17:00.193 Firmware Update Granularity: No Information Provided 00:17:00.193 Per-Namespace SMART Log: No 00:17:00.193 Asymmetric Namespace Access Log Page: Not Supported 00:17:00.193 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:00.193 Command Effects Log Page: Not Supported 00:17:00.193 Get Log Page Extended Data: Supported 00:17:00.193 Telemetry Log Pages: Not Supported 00:17:00.193 Persistent Event Log Pages: Not Supported 00:17:00.193 Supported Log Pages Log Page: May Support 00:17:00.193 Commands Supported & Effects Log Page: Not Supported 00:17:00.193 Feature Identifiers & Effects Log Page:May Support 00:17:00.193 NVMe-MI Commands & Effects Log Page: May Support 00:17:00.193 Data Area 4 for Telemetry Log: Not Supported 00:17:00.193 Error Log Page Entries Supported: 1 00:17:00.193 Keep Alive: Not Supported 00:17:00.193 00:17:00.193 NVM Command Set Attributes 00:17:00.193 ========================== 00:17:00.193 Submission Queue Entry Size 00:17:00.193 Max: 1 00:17:00.193 Min: 1 00:17:00.193 Completion Queue Entry Size 00:17:00.193 Max: 1 00:17:00.193 Min: 1 00:17:00.193 Number of Namespaces: 0 00:17:00.193 Compare Command: Not Supported 00:17:00.193 Write Uncorrectable Command: Not Supported 00:17:00.193 Dataset Management Command: Not Supported 00:17:00.193 Write Zeroes Command: Not Supported 00:17:00.193 Set Features Save Field: Not Supported 00:17:00.193 Reservations: Not Supported 00:17:00.193 Timestamp: Not Supported 00:17:00.193 Copy: Not Supported 00:17:00.193 Volatile Write Cache: Not Present 00:17:00.193 Atomic Write Unit (Normal): 1 00:17:00.193 Atomic Write Unit (PFail): 1 00:17:00.193 Atomic Compare & Write Unit: 1 00:17:00.194 Fused Compare & Write: Not Supported 00:17:00.194 Scatter-Gather List 00:17:00.194 SGL Command Set: Supported 00:17:00.194 SGL Keyed: Not Supported 00:17:00.194 SGL Bit Bucket Descriptor: Not Supported 00:17:00.194 SGL Metadata Pointer: Not Supported 00:17:00.194 Oversized SGL: Not Supported 00:17:00.194 SGL Metadata Address: Not Supported 00:17:00.194 SGL Offset: Supported 00:17:00.194 Transport SGL Data Block: Not Supported 00:17:00.194 Replay Protected Memory Block: Not Supported 00:17:00.194 00:17:00.194 Firmware Slot Information 00:17:00.194 ========================= 00:17:00.194 Active slot: 0 00:17:00.194 00:17:00.194 00:17:00.194 Error Log 
00:17:00.194 ========= 00:17:00.194 00:17:00.194 Active Namespaces 00:17:00.194 ================= 00:17:00.194 Discovery Log Page 00:17:00.194 ================== 00:17:00.194 Generation Counter: 2 00:17:00.194 Number of Records: 2 00:17:00.194 Record Format: 0 00:17:00.194 00:17:00.194 Discovery Log Entry 0 00:17:00.194 ---------------------- 00:17:00.194 Transport Type: 3 (TCP) 00:17:00.194 Address Family: 1 (IPv4) 00:17:00.194 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:00.194 Entry Flags: 00:17:00.194 Duplicate Returned Information: 0 00:17:00.194 Explicit Persistent Connection Support for Discovery: 0 00:17:00.194 Transport Requirements: 00:17:00.194 Secure Channel: Not Specified 00:17:00.194 Port ID: 1 (0x0001) 00:17:00.194 Controller ID: 65535 (0xffff) 00:17:00.194 Admin Max SQ Size: 32 00:17:00.194 Transport Service Identifier: 4420 00:17:00.194 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:00.194 Transport Address: 10.0.0.1 00:17:00.194 Discovery Log Entry 1 00:17:00.194 ---------------------- 00:17:00.194 Transport Type: 3 (TCP) 00:17:00.194 Address Family: 1 (IPv4) 00:17:00.194 Subsystem Type: 2 (NVM Subsystem) 00:17:00.194 Entry Flags: 00:17:00.194 Duplicate Returned Information: 0 00:17:00.194 Explicit Persistent Connection Support for Discovery: 0 00:17:00.194 Transport Requirements: 00:17:00.194 Secure Channel: Not Specified 00:17:00.194 Port ID: 1 (0x0001) 00:17:00.194 Controller ID: 65535 (0xffff) 00:17:00.194 Admin Max SQ Size: 32 00:17:00.194 Transport Service Identifier: 4420 00:17:00.194 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:17:00.194 Transport Address: 10.0.0.1 00:17:00.194 10:35:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:17:00.194 get_feature(0x01) failed 00:17:00.194 get_feature(0x02) failed 00:17:00.194 get_feature(0x04) failed 00:17:00.194 ===================================================== 00:17:00.194 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:17:00.194 ===================================================== 00:17:00.194 Controller Capabilities/Features 00:17:00.194 ================================ 00:17:00.194 Vendor ID: 0000 00:17:00.194 Subsystem Vendor ID: 0000 00:17:00.194 Serial Number: 763b1ebcfeb7bea27c4f 00:17:00.194 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:17:00.194 Firmware Version: 6.8.9-20 00:17:00.194 Recommended Arb Burst: 6 00:17:00.194 IEEE OUI Identifier: 00 00 00 00:17:00.194 Multi-path I/O 00:17:00.194 May have multiple subsystem ports: Yes 00:17:00.194 May have multiple controllers: Yes 00:17:00.194 Associated with SR-IOV VF: No 00:17:00.194 Max Data Transfer Size: Unlimited 00:17:00.194 Max Number of Namespaces: 1024 00:17:00.194 Max Number of I/O Queues: 128 00:17:00.194 NVMe Specification Version (VS): 1.3 00:17:00.194 NVMe Specification Version (Identify): 1.3 00:17:00.194 Maximum Queue Entries: 1024 00:17:00.194 Contiguous Queues Required: No 00:17:00.194 Arbitration Mechanisms Supported 00:17:00.194 Weighted Round Robin: Not Supported 00:17:00.194 Vendor Specific: Not Supported 00:17:00.194 Reset Timeout: 7500 ms 00:17:00.194 Doorbell Stride: 4 bytes 00:17:00.194 NVM Subsystem Reset: Not Supported 00:17:00.194 Command Sets Supported 00:17:00.194 NVM Command Set: Supported 00:17:00.194 Boot Partition: Not Supported 00:17:00.194 Memory 
Page Size Minimum: 4096 bytes 00:17:00.194 Memory Page Size Maximum: 4096 bytes 00:17:00.194 Persistent Memory Region: Not Supported 00:17:00.194 Optional Asynchronous Events Supported 00:17:00.194 Namespace Attribute Notices: Supported 00:17:00.194 Firmware Activation Notices: Not Supported 00:17:00.194 ANA Change Notices: Supported 00:17:00.194 PLE Aggregate Log Change Notices: Not Supported 00:17:00.194 LBA Status Info Alert Notices: Not Supported 00:17:00.194 EGE Aggregate Log Change Notices: Not Supported 00:17:00.194 Normal NVM Subsystem Shutdown event: Not Supported 00:17:00.194 Zone Descriptor Change Notices: Not Supported 00:17:00.194 Discovery Log Change Notices: Not Supported 00:17:00.194 Controller Attributes 00:17:00.194 128-bit Host Identifier: Supported 00:17:00.194 Non-Operational Permissive Mode: Not Supported 00:17:00.194 NVM Sets: Not Supported 00:17:00.194 Read Recovery Levels: Not Supported 00:17:00.194 Endurance Groups: Not Supported 00:17:00.194 Predictable Latency Mode: Not Supported 00:17:00.194 Traffic Based Keep ALive: Supported 00:17:00.194 Namespace Granularity: Not Supported 00:17:00.194 SQ Associations: Not Supported 00:17:00.194 UUID List: Not Supported 00:17:00.194 Multi-Domain Subsystem: Not Supported 00:17:00.194 Fixed Capacity Management: Not Supported 00:17:00.194 Variable Capacity Management: Not Supported 00:17:00.194 Delete Endurance Group: Not Supported 00:17:00.194 Delete NVM Set: Not Supported 00:17:00.194 Extended LBA Formats Supported: Not Supported 00:17:00.194 Flexible Data Placement Supported: Not Supported 00:17:00.194 00:17:00.194 Controller Memory Buffer Support 00:17:00.194 ================================ 00:17:00.194 Supported: No 00:17:00.194 00:17:00.194 Persistent Memory Region Support 00:17:00.194 ================================ 00:17:00.194 Supported: No 00:17:00.194 00:17:00.194 Admin Command Set Attributes 00:17:00.194 ============================ 00:17:00.194 Security Send/Receive: Not Supported 00:17:00.194 Format NVM: Not Supported 00:17:00.194 Firmware Activate/Download: Not Supported 00:17:00.194 Namespace Management: Not Supported 00:17:00.194 Device Self-Test: Not Supported 00:17:00.194 Directives: Not Supported 00:17:00.194 NVMe-MI: Not Supported 00:17:00.194 Virtualization Management: Not Supported 00:17:00.194 Doorbell Buffer Config: Not Supported 00:17:00.194 Get LBA Status Capability: Not Supported 00:17:00.194 Command & Feature Lockdown Capability: Not Supported 00:17:00.194 Abort Command Limit: 4 00:17:00.194 Async Event Request Limit: 4 00:17:00.194 Number of Firmware Slots: N/A 00:17:00.194 Firmware Slot 1 Read-Only: N/A 00:17:00.194 Firmware Activation Without Reset: N/A 00:17:00.194 Multiple Update Detection Support: N/A 00:17:00.194 Firmware Update Granularity: No Information Provided 00:17:00.194 Per-Namespace SMART Log: Yes 00:17:00.194 Asymmetric Namespace Access Log Page: Supported 00:17:00.194 ANA Transition Time : 10 sec 00:17:00.194 00:17:00.194 Asymmetric Namespace Access Capabilities 00:17:00.194 ANA Optimized State : Supported 00:17:00.194 ANA Non-Optimized State : Supported 00:17:00.194 ANA Inaccessible State : Supported 00:17:00.194 ANA Persistent Loss State : Supported 00:17:00.194 ANA Change State : Supported 00:17:00.194 ANAGRPID is not changed : No 00:17:00.194 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:17:00.194 00:17:00.194 ANA Group Identifier Maximum : 128 00:17:00.194 Number of ANA Group Identifiers : 128 00:17:00.194 Max Number of Allowed Namespaces : 1024 00:17:00.194 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:17:00.194 Command Effects Log Page: Supported 00:17:00.195 Get Log Page Extended Data: Supported 00:17:00.195 Telemetry Log Pages: Not Supported 00:17:00.195 Persistent Event Log Pages: Not Supported 00:17:00.195 Supported Log Pages Log Page: May Support 00:17:00.195 Commands Supported & Effects Log Page: Not Supported 00:17:00.195 Feature Identifiers & Effects Log Page:May Support 00:17:00.195 NVMe-MI Commands & Effects Log Page: May Support 00:17:00.195 Data Area 4 for Telemetry Log: Not Supported 00:17:00.195 Error Log Page Entries Supported: 128 00:17:00.195 Keep Alive: Supported 00:17:00.195 Keep Alive Granularity: 1000 ms 00:17:00.195 00:17:00.195 NVM Command Set Attributes 00:17:00.195 ========================== 00:17:00.195 Submission Queue Entry Size 00:17:00.195 Max: 64 00:17:00.195 Min: 64 00:17:00.195 Completion Queue Entry Size 00:17:00.195 Max: 16 00:17:00.195 Min: 16 00:17:00.195 Number of Namespaces: 1024 00:17:00.195 Compare Command: Not Supported 00:17:00.195 Write Uncorrectable Command: Not Supported 00:17:00.195 Dataset Management Command: Supported 00:17:00.195 Write Zeroes Command: Supported 00:17:00.195 Set Features Save Field: Not Supported 00:17:00.195 Reservations: Not Supported 00:17:00.195 Timestamp: Not Supported 00:17:00.195 Copy: Not Supported 00:17:00.195 Volatile Write Cache: Present 00:17:00.195 Atomic Write Unit (Normal): 1 00:17:00.195 Atomic Write Unit (PFail): 1 00:17:00.195 Atomic Compare & Write Unit: 1 00:17:00.195 Fused Compare & Write: Not Supported 00:17:00.195 Scatter-Gather List 00:17:00.195 SGL Command Set: Supported 00:17:00.195 SGL Keyed: Not Supported 00:17:00.195 SGL Bit Bucket Descriptor: Not Supported 00:17:00.195 SGL Metadata Pointer: Not Supported 00:17:00.195 Oversized SGL: Not Supported 00:17:00.195 SGL Metadata Address: Not Supported 00:17:00.195 SGL Offset: Supported 00:17:00.195 Transport SGL Data Block: Not Supported 00:17:00.195 Replay Protected Memory Block: Not Supported 00:17:00.195 00:17:00.195 Firmware Slot Information 00:17:00.195 ========================= 00:17:00.195 Active slot: 0 00:17:00.195 00:17:00.195 Asymmetric Namespace Access 00:17:00.195 =========================== 00:17:00.195 Change Count : 0 00:17:00.195 Number of ANA Group Descriptors : 1 00:17:00.195 ANA Group Descriptor : 0 00:17:00.195 ANA Group ID : 1 00:17:00.195 Number of NSID Values : 1 00:17:00.195 Change Count : 0 00:17:00.195 ANA State : 1 00:17:00.195 Namespace Identifier : 1 00:17:00.195 00:17:00.195 Commands Supported and Effects 00:17:00.195 ============================== 00:17:00.195 Admin Commands 00:17:00.195 -------------- 00:17:00.195 Get Log Page (02h): Supported 00:17:00.195 Identify (06h): Supported 00:17:00.195 Abort (08h): Supported 00:17:00.195 Set Features (09h): Supported 00:17:00.195 Get Features (0Ah): Supported 00:17:00.195 Asynchronous Event Request (0Ch): Supported 00:17:00.195 Keep Alive (18h): Supported 00:17:00.195 I/O Commands 00:17:00.195 ------------ 00:17:00.195 Flush (00h): Supported 00:17:00.195 Write (01h): Supported LBA-Change 00:17:00.195 Read (02h): Supported 00:17:00.195 Write Zeroes (08h): Supported LBA-Change 00:17:00.195 Dataset Management (09h): Supported 00:17:00.195 00:17:00.195 Error Log 00:17:00.195 ========= 00:17:00.195 Entry: 0 00:17:00.195 Error Count: 0x3 00:17:00.195 Submission Queue Id: 0x0 00:17:00.195 Command Id: 0x5 00:17:00.195 Phase Bit: 0 00:17:00.195 Status Code: 0x2 00:17:00.195 Status Code Type: 0x0 00:17:00.195 Do Not Retry: 1 00:17:00.195 Error 
Location: 0x28 00:17:00.195 LBA: 0x0 00:17:00.195 Namespace: 0x0 00:17:00.195 Vendor Log Page: 0x0 00:17:00.195 ----------- 00:17:00.195 Entry: 1 00:17:00.195 Error Count: 0x2 00:17:00.195 Submission Queue Id: 0x0 00:17:00.195 Command Id: 0x5 00:17:00.195 Phase Bit: 0 00:17:00.195 Status Code: 0x2 00:17:00.195 Status Code Type: 0x0 00:17:00.195 Do Not Retry: 1 00:17:00.195 Error Location: 0x28 00:17:00.195 LBA: 0x0 00:17:00.195 Namespace: 0x0 00:17:00.195 Vendor Log Page: 0x0 00:17:00.195 ----------- 00:17:00.195 Entry: 2 00:17:00.195 Error Count: 0x1 00:17:00.195 Submission Queue Id: 0x0 00:17:00.195 Command Id: 0x4 00:17:00.195 Phase Bit: 0 00:17:00.195 Status Code: 0x2 00:17:00.195 Status Code Type: 0x0 00:17:00.195 Do Not Retry: 1 00:17:00.195 Error Location: 0x28 00:17:00.195 LBA: 0x0 00:17:00.195 Namespace: 0x0 00:17:00.195 Vendor Log Page: 0x0 00:17:00.195 00:17:00.195 Number of Queues 00:17:00.195 ================ 00:17:00.195 Number of I/O Submission Queues: 128 00:17:00.195 Number of I/O Completion Queues: 128 00:17:00.195 00:17:00.195 ZNS Specific Controller Data 00:17:00.195 ============================ 00:17:00.195 Zone Append Size Limit: 0 00:17:00.195 00:17:00.195 00:17:00.195 Active Namespaces 00:17:00.195 ================= 00:17:00.195 get_feature(0x05) failed 00:17:00.195 Namespace ID:1 00:17:00.195 Command Set Identifier: NVM (00h) 00:17:00.195 Deallocate: Supported 00:17:00.195 Deallocated/Unwritten Error: Not Supported 00:17:00.195 Deallocated Read Value: Unknown 00:17:00.195 Deallocate in Write Zeroes: Not Supported 00:17:00.195 Deallocated Guard Field: 0xFFFF 00:17:00.195 Flush: Supported 00:17:00.195 Reservation: Not Supported 00:17:00.195 Namespace Sharing Capabilities: Multiple Controllers 00:17:00.195 Size (in LBAs): 1310720 (5GiB) 00:17:00.195 Capacity (in LBAs): 1310720 (5GiB) 00:17:00.195 Utilization (in LBAs): 1310720 (5GiB) 00:17:00.195 UUID: 42ccd4b6-1bb1-4330-98b4-e5ade1aabf82 00:17:00.195 Thin Provisioning: Not Supported 00:17:00.195 Per-NS Atomic Units: Yes 00:17:00.195 Atomic Boundary Size (Normal): 0 00:17:00.195 Atomic Boundary Size (PFail): 0 00:17:00.195 Atomic Boundary Offset: 0 00:17:00.195 NGUID/EUI64 Never Reused: No 00:17:00.195 ANA group ID: 1 00:17:00.195 Namespace Write Protected: No 00:17:00.195 Number of LBA Formats: 1 00:17:00.195 Current LBA Format: LBA Format #00 00:17:00.195 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:17:00.195 00:17:00.195 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:17:00.195 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:00.195 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:00.454 rmmod nvme_tcp 00:17:00.454 rmmod nvme_fabrics 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:17:00.454 10:35:01 
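Annotation: nvmftestfini, traced below, unloads nvme-tcp/nvme-fabrics and then strips only the firewall rules the test itself installed: iptables-save is piped through grep -v SPDK_NVMF and back into iptables-restore, so every rule carrying the SPDK_NVMF comment added during setup disappears while unrelated rules survive. The tag-and-filter pattern in isolation:

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF: test rule'        # rule added during setup
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # cleanup: drop only the tagged rules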
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:00.454 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:00.713 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:00.713 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.713 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:00.713 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.713 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:17:00.713 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:17:00.713 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:17:00.713 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:17:00.713 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:00.713 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:00.713 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:00.713 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:00.713 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:17:00.713 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:17:00.713 10:35:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:01.280 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:01.539 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:01.539 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:01.539 00:17:01.539 real 0m3.252s 00:17:01.539 user 0m1.151s 00:17:01.539 sys 0m1.470s 00:17:01.539 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:01.539 ************************************ 00:17:01.539 END TEST nvmf_identify_kernel_target 00:17:01.539 ************************************ 00:17:01.539 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.539 10:35:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:01.539 10:35:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:01.539 10:35:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:01.539 10:35:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.539 ************************************ 00:17:01.539 START TEST nvmf_auth_host 00:17:01.539 ************************************ 00:17:01.539 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:01.798 * Looking for test storage... 
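Annotation: clean_kernel_target, traced just above before the nvmf_auth_host test begins, dismantles the configfs target in the reverse order it was built: unlink the subsystem from the port, remove the namespace, port and subsystem directories, then unload the nvmet modules. The bare `echo 0` presumably disables the namespace first; its redirection target is not visible in the xtrace. Condensed sketch:

  echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable   # assumed target of the 'echo 0'
  rm -f   /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir   /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  rmdir   /sys/kernel/config/nvmet/ports/1
  rmdir   /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe -r nvmet_tcp nvmet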
00:17:01.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:01.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.798 --rc genhtml_branch_coverage=1 00:17:01.798 --rc genhtml_function_coverage=1 00:17:01.798 --rc genhtml_legend=1 00:17:01.798 --rc geninfo_all_blocks=1 00:17:01.798 --rc geninfo_unexecuted_blocks=1 00:17:01.798 00:17:01.798 ' 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:01.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.798 --rc genhtml_branch_coverage=1 00:17:01.798 --rc genhtml_function_coverage=1 00:17:01.798 --rc genhtml_legend=1 00:17:01.798 --rc geninfo_all_blocks=1 00:17:01.798 --rc geninfo_unexecuted_blocks=1 00:17:01.798 00:17:01.798 ' 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:01.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.798 --rc genhtml_branch_coverage=1 00:17:01.798 --rc genhtml_function_coverage=1 00:17:01.798 --rc genhtml_legend=1 00:17:01.798 --rc geninfo_all_blocks=1 00:17:01.798 --rc geninfo_unexecuted_blocks=1 00:17:01.798 00:17:01.798 ' 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:01.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.798 --rc genhtml_branch_coverage=1 00:17:01.798 --rc genhtml_function_coverage=1 00:17:01.798 --rc genhtml_legend=1 00:17:01.798 --rc geninfo_all_blocks=1 00:17:01.798 --rc geninfo_unexecuted_blocks=1 00:17:01.798 00:17:01.798 ' 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.798 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:01.799 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:01.799 Cannot find device "nvmf_init_br" 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:01.799 Cannot find device "nvmf_init_br2" 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:01.799 Cannot find device "nvmf_tgt_br" 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:01.799 Cannot find device "nvmf_tgt_br2" 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:01.799 Cannot find device "nvmf_init_br" 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:01.799 Cannot find device "nvmf_init_br2" 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:17:01.799 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:02.058 Cannot find device "nvmf_tgt_br" 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:02.058 Cannot find device "nvmf_tgt_br2" 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:02.058 Cannot find device "nvmf_br" 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:02.058 Cannot find device "nvmf_init_if" 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:02.058 Cannot find device "nvmf_init_if2" 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:02.058 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:02.058 10:35:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:02.058 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:02.058 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:02.059 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:02.059 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:02.059 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:02.059 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:02.059 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:02.059 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:02.059 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:02.059 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:02.059 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:02.059 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
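Note: the burst of "Cannot find device ..." messages above is harmless. nvmf_veth_init first tries to tear down interfaces left over from a previous run (each failed delete is simply followed by "true"), then rebuilds the test network from scratch: an nvmf_tgt_ns_spdk namespace for the target, four veth pairs, addresses 10.0.0.1/2 on the initiator side and 10.0.0.3/4 inside the namespace, and a bridge joining the host-side peers. A condensed sketch of that topology, using only commands and names visible in the trace (an illustration, not the verbatim nvmf/common.sh):

  # Condensed sketch of the topology nvmf_veth_init builds above; the loops are shorthand.
  ip netns add nvmf_tgt_ns_spdk
  # two initiator-side and two target-side veth pairs; the *_br peers stay on the host
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  # the target ends move into the namespace where nvmf_tgt will later run
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator addresses on the host, target addresses inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bring everything up and join the host-side peers with a single bridge
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

With this layout, traffic from the initiator addresses reaches the target addresses purely through the bridge, so the whole NVMe/TCP exchange stays on one VM.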
00:17:02.317 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:02.317 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:02.317 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:02.317 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:02.317 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:02.317 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:02.317 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:02.317 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:02.317 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:02.317 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:17:02.317 00:17:02.317 --- 10.0.0.3 ping statistics --- 00:17:02.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.317 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:17:02.317 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:02.317 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:02.317 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:17:02.317 00:17:02.317 --- 10.0.0.4 ping statistics --- 00:17:02.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.317 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:02.317 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:02.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:02.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:02.317 00:17:02.317 --- 10.0.0.1 ping statistics --- 00:17:02.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.317 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:02.317 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:02.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:02.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:17:02.317 00:17:02.317 --- 10.0.0.2 ping statistics --- 00:17:02.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.317 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:02.317 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.317 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:17:02.317 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:02.317 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.317 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:02.317 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:02.317 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.317 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:02.318 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:02.318 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:02.318 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:02.318 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:02.318 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.318 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78470 00:17:02.318 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:02.318 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78470 00:17:02.318 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 78470 ']' 00:17:02.318 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.318 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:02.318 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
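With the bridge in place, the trace above opens the NVMe/TCP port (4420) in the host firewall, ping-checks all four addresses in both directions, and then nvmfappstart launches the SPDK target inside the namespace with nvme_auth debug logging. Each iptables rule is tagged with an "SPDK_NVMF:" comment that embeds the rule itself, presumably so cleanup can find and remove exactly these rules later. A condensed sketch of that sequence; waitforlisten is more elaborate in common.sh, so the rpc_get_methods poll below is only a simplified stand-in:

  # open the NVMe/TCP port on both initiator interfaces and allow bridge-local forwarding
  for dev in nvmf_init_if nvmf_init_if2; do
      rule="-I INPUT 1 -i $dev -p tcp --dport 4420 -j ACCEPT"
      iptables $rule -m comment --comment "SPDK_NVMF:$rule"   # tagged for later cleanup
  done
  rule="-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT"
  iptables $rule -m comment --comment "SPDK_NVMF:$rule"
  # connectivity check in both directions across the bridge
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
  # start nvmf_tgt inside the namespace with nvme_auth debug logging (pid 78470 in this run)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
  nvmfpid=$!
  # simplified stand-in for waitforlisten: poll until the RPC server answers on its UNIX socket
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done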
00:17:02.318 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:02.318 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.576 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:02.576 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:17:02.576 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:02.576 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:02.576 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c2920e5b2d5abcd419106107049062cc 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Bgj 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c2920e5b2d5abcd419106107049062cc 0 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c2920e5b2d5abcd419106107049062cc 0 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c2920e5b2d5abcd419106107049062cc 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Bgj 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Bgj 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Bgj 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:02.836 10:35:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=613a16978728b2ec5da39f63f48ee1f2692dac8a6f7c5b824fab99a841914c30 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ibg 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 613a16978728b2ec5da39f63f48ee1f2692dac8a6f7c5b824fab99a841914c30 3 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 613a16978728b2ec5da39f63f48ee1f2692dac8a6f7c5b824fab99a841914c30 3 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=613a16978728b2ec5da39f63f48ee1f2692dac8a6f7c5b824fab99a841914c30 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ibg 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ibg 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.ibg 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c1d8b888351170a9780012194b6b2beb5b4017248cd093ad 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.gzz 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c1d8b888351170a9780012194b6b2beb5b4017248cd093ad 0 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c1d8b888351170a9780012194b6b2beb5b4017248cd093ad 0 
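Each secret used by the auth test comes from gen_dhchap_key <digest> <length>, traced above and repeated for keys[0..4] and the controller keys: <length> hex characters are read from /dev/urandom with xxd, written to a mktemp file named spdk.key-<digest>.XXX, and wrapped by an inline python step into a DHHC-1 secret string. The trace only shows "python -", so the sketch below assumes the python step appends a CRC-32 of the ASCII secret before base64-encoding, which matches the key material decoded from the DHHC-1 strings later in the log; treat that detail as inferred rather than quoted:

  # Sketch of gen_dhchap_key as it appears in the trace; digest ids follow the
  # digests map shown above (null=0, sha256=1, sha384=2, sha512=3).
  gen_dhchap_key() {
      local digest=$1 len=$2                          # e.g. "null" 32, or "sha512" 64
      declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
      local key file
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)  # len hex characters of randomness
      file=$(mktemp -t "spdk.key-$digest.XXX")
      # DHHC-1:<digest id>:<base64(secret || crc32(secret))>:  -- CRC-32 suffix is an assumption
      python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key" "${digests[$digest]}" > "$file"
      chmod 0600 "$file"
      echo "$file"
  }

  # usage as in host/auth.sh: keys[0]=$(gen_dhchap_key null 32); ckeys[0]=$(gen_dhchap_key sha512 64)

The resulting /tmp/spdk.key-* files are what the test later registers with the SPDK keyring (rpc_cmd keyring_file_add_key) and copies into the kernel nvmet host entry.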
00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c1d8b888351170a9780012194b6b2beb5b4017248cd093ad 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.gzz 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.gzz 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.gzz 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:02.836 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:02.837 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:02.837 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:17:02.837 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:02.837 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:02.837 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=933c898d219600df86c033b5c222e872739da70e117630da 00:17:02.837 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:02.837 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.OEi 00:17:02.837 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 933c898d219600df86c033b5c222e872739da70e117630da 2 00:17:02.837 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 933c898d219600df86c033b5c222e872739da70e117630da 2 00:17:02.837 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:02.837 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:02.837 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=933c898d219600df86c033b5c222e872739da70e117630da 00:17:02.837 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:17:02.837 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.OEi 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.OEi 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.OEi 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:03.099 10:35:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1193cb7086c5fac067ef8dceb9c69d9e 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.gLC 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1193cb7086c5fac067ef8dceb9c69d9e 1 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1193cb7086c5fac067ef8dceb9c69d9e 1 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1193cb7086c5fac067ef8dceb9c69d9e 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.gLC 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.gLC 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.gLC 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ea5841d991f442ce9e1e3eab667c5678 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.JH2 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ea5841d991f442ce9e1e3eab667c5678 1 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ea5841d991f442ce9e1e3eab667c5678 1 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=ea5841d991f442ce9e1e3eab667c5678 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.JH2 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.JH2 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.JH2 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c4b528f5677b84df1fb3a896e9cbf10ebf692e853c75c2e2 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.dDq 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c4b528f5677b84df1fb3a896e9cbf10ebf692e853c75c2e2 2 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c4b528f5677b84df1fb3a896e9cbf10ebf692e853c75c2e2 2 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c4b528f5677b84df1fb3a896e9cbf10ebf692e853c75c2e2 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.dDq 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.dDq 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.dDq 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:03.099 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:03.099 10:35:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:03.100 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=752e48b040099ecfa3ce01fd80f88177 00:17:03.100 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:03.100 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.FsI 00:17:03.100 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 752e48b040099ecfa3ce01fd80f88177 0 00:17:03.100 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 752e48b040099ecfa3ce01fd80f88177 0 00:17:03.100 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:03.100 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:03.100 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=752e48b040099ecfa3ce01fd80f88177 00:17:03.100 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:03.100 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:03.100 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.FsI 00:17:03.100 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.FsI 00:17:03.100 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.FsI 00:17:03.100 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:03.100 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:03.100 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:03.100 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:03.100 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:17:03.100 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:17:03.100 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:03.100 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=121dbc851627ab47b525a0cf88d32195a8fb04ebfd62b350cbf1d51e96703658 00:17:03.100 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:03.359 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.UEQ 00:17:03.359 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 121dbc851627ab47b525a0cf88d32195a8fb04ebfd62b350cbf1d51e96703658 3 00:17:03.359 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 121dbc851627ab47b525a0cf88d32195a8fb04ebfd62b350cbf1d51e96703658 3 00:17:03.359 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:03.359 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:03.359 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=121dbc851627ab47b525a0cf88d32195a8fb04ebfd62b350cbf1d51e96703658 00:17:03.359 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:17:03.359 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:17:03.359 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.UEQ 00:17:03.359 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.UEQ 00:17:03.359 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.UEQ 00:17:03.359 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:03.359 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78470 00:17:03.359 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 78470 ']' 00:17:03.359 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.359 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:03.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.359 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.359 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:03.359 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Bgj 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.ibg ]] 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ibg 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.gzz 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.OEi ]] 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.OEi 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.gLC 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.JH2 ]] 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JH2 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.dDq 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.FsI ]] 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.FsI 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.UEQ 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:03.621 10:35:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:03.621 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:04.200 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:04.200 Waiting for block devices as requested 00:17:04.200 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:04.200 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:04.767 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:04.767 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:04.767 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:17:04.767 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:17:04.767 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:04.767 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:04.767 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:17:04.767 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:17:04.767 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:04.767 No valid GPT data, bailing 00:17:04.767 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:04.767 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:04.767 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:04.767 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:17:04.767 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:04.767 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:04.767 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:17:04.767 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:17:04.767 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:04.767 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:04.767 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:17:04.767 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:17:04.768 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:04.768 No valid GPT data, bailing 00:17:04.768 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:04.768 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:04.768 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:17:04.768 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:17:04.768 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:04.768 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:04.768 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:17:04.768 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:17:04.768 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:04.768 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:04.768 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:17:04.768 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:17:04.768 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:05.026 No valid GPT data, bailing 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:05.026 No valid GPT data, bailing 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:17:05.026 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:17:05.027 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:05.027 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid=b4733420-cf17-49bc-adb6-f89fe6fa7a33 -a 10.0.0.1 -t tcp -s 4420 00:17:05.027 00:17:05.027 Discovery Log Number of Records 2, Generation counter 2 00:17:05.027 =====Discovery Log Entry 0====== 00:17:05.027 trtype: tcp 00:17:05.027 adrfam: ipv4 00:17:05.027 subtype: current discovery subsystem 00:17:05.027 treq: not specified, sq flow control disable supported 00:17:05.027 portid: 1 00:17:05.027 trsvcid: 4420 00:17:05.027 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:05.027 traddr: 10.0.0.1 00:17:05.027 eflags: none 00:17:05.027 sectype: none 00:17:05.027 =====Discovery Log Entry 1====== 00:17:05.027 trtype: tcp 00:17:05.027 adrfam: ipv4 00:17:05.027 subtype: nvme subsystem 00:17:05.027 treq: not specified, sq flow control disable supported 00:17:05.027 portid: 1 00:17:05.027 trsvcid: 4420 00:17:05.027 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:05.027 traddr: 10.0.0.1 00:17:05.027 eflags: none 00:17:05.027 sectype: none 00:17:05.027 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:05.027 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:05.027 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:05.027 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:05.027 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.027 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:05.027 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:05.027 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:05.027 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:05.027 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
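(The mkdir/echo/ln -s sequence from nvmf/common.sh@686-705 above builds the in-kernel soft target through configfs and then verifies it with nvme discover, which is where the two discovery log records come from. The xtrace does not show where each echo is redirected, so the attribute names below are assumptions based on the standard nvmet configfs layout; only the paths, addresses and values come from the log, and the serial/model and allow_any_host writes are left out because their destinations are not visible in the trace:

    #!/usr/bin/env bash
    # nvmet_soft_target.sh - hedged reconstruction of the configfs setup traced above
    subnqn=nqn.2024-02.io.spdk:cnode0
    backing=/dev/nvme1n1                  # the namespace picked by the block_in_use loop
    sub=/sys/kernel/config/nvmet/subsystems/$subnqn
    port=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet
    mkdir -p "$sub/namespaces/1" "$port"
    echo "$backing" > "$sub/namespaces/1/device_path"   # attribute names assumed
    echo 1          > "$sub/namespaces/1/enable"
    echo tcp        > "$port/addr_trtype"
    echo 10.0.0.1   > "$port/addr_traddr"
    echo 4420       > "$port/addr_trsvcid"
    echo ipv4       > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/$subnqn"

    # sanity check, as done in the trace (hostnqn/hostid options shortened here)
    nvme discover -t tcp -a 10.0.0.1 -s 4420
)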
ckey=DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:05.027 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:05.027 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: ]] 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
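(host/auth.sh's nvmet_auth_set_key, the echo 'hmac(sha256)' / echo ffdhe2048 / echo DHHC-1:... calls just traced, installs the per-host DH-HMAC-CHAP material on the target side, and host/auth.sh@36-38 earlier created the host entry and linked it into the subsystem's allowed_hosts. The redirect targets are again not visible in the xtrace; assuming they are the kernel's dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrl_key attributes under /sys/kernel/config/nvmet/hosts/<hostnqn>/, a sketch of what the helper presumably does (key material shortened):

    #!/usr/bin/env bash
    # nvmet_set_dhchap.sh - hypothetical reconstruction; attribute names are assumptions
    hostnqn=nqn.2024-02.io.spdk:host0
    subnqn=nqn.2024-02.io.spdk:cnode0
    host=/sys/kernel/config/nvmet/hosts/$hostnqn
    sub=/sys/kernel/config/nvmet/subsystems/$subnqn

    mkdir -p "$host"
    echo 0 > "$sub/attr_allow_any_host"              # only explicitly allowed hosts may connect
    ln -s "$host" "$sub/allowed_hosts/$hostnqn"

    echo 'hmac(sha256)'  > "$host/dhchap_hash"       # digest under test
    echo ffdhe2048       > "$host/dhchap_dhgroup"    # DH group under test
    echo 'DHHC-1:00:...' > "$host/dhchap_key"        # host key (keyN in the trace)
    echo 'DHHC-1:02:...' > "$host/dhchap_ctrl_key"   # controller key (ckeyN), skipped when empty
)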
10.0.0.1 ]] 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.286 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.286 nvme0n1 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: ]] 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
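(On the initiator side, connect_authenticate drives everything through the SPDK RPC layer: bdev_nvme_set_options selects which DH-HMAC-CHAP digests and DH groups the host may negotiate, and bdev_nvme_attach_controller connects to the soft target with --dhchap-key / --dhchap-ctrlr-key. The same two calls as traced above, issued via scripts/rpc.py against a running SPDK application; key1/ckey1 are key names registered earlier in the test and not visible in this excerpt:

    # allow every digest/dhgroup combination, as in the first connect above
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

    # connect to the kernel soft target and authenticate with key pair 1
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
)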
host/auth.sh@51 -- # echo DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:05.286 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:05.287 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.287 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:05.287 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:05.287 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:05.287 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.287 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:05.287 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.287 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.287 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.287 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.287 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:05.287 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:05.287 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:05.287 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.287 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.287 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:05.287 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.287 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:05.287 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:05.287 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:05.287 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.287 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.287 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.545 nvme0n1 00:17:05.545 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.545 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.545 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.545 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.545 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.545 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.545 
10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.545 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.545 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.545 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.545 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.545 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.545 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:05.545 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.545 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:05.545 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:05.545 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:05.545 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:05.545 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:05.545 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:05.545 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: ]] 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:05.546 10:35:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.546 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.805 nvme0n1 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:05.805 10:35:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: ]] 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.805 nvme0n1 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: ]] 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:05.805 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.806 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:05.806 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.806 10:35:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.064 nvme0n1 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:06.064 
10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:06.064 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:06.065 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:06.065 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.065 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
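(Each iteration of the digest/dhgroup/keyid loop ends the same way: the test lists the attached controllers, asserts that exactly nvme0 came up, i.e. the authenticated connect succeeded, and detaches it before the next key is tried. The check-and-teardown step, extracted from the repeated rpc_cmd/jq pattern above:

    # verify the authenticated connect and clean up before the next iteration
    name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]                             # fails the test if the connect did not authenticate
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
)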
00:17:06.324 nvme0n1 00:17:06.324 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.324 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.324 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.324 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.324 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.324 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.324 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.324 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.324 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.324 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.324 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.324 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.324 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.324 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:06.324 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.324 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:06.324 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:06.324 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:06.324 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:06.324 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:06.324 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:06.324 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:06.583 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:06.583 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: ]] 00:17:06.583 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:06.583 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:06.583 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.583 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:06.583 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:06.583 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:06.583 10:35:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.583 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:06.583 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.583 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.583 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.584 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.584 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:06.584 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:06.584 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:06.584 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.584 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.584 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:06.584 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.584 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:06.584 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:06.584 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:06.584 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.584 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.584 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.843 nvme0n1 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.843 10:35:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: ]] 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.843 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:06.844 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:06.844 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:06.844 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.844 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:06.844 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.844 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.844 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.844 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.844 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:06.844 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:06.844 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:06.844 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.844 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.844 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:06.844 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.844 10:35:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:06.844 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:06.844 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:06.844 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.844 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.844 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.103 nvme0n1 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: ]] 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.103 nvme0n1 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.103 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: ]] 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.363 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.364 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:07.364 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.364 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:07.364 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:07.364 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:07.364 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:07.364 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.364 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.364 nvme0n1 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.364 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.623 nvme0n1 00:17:07.623 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.623 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.623 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.623 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.623 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.623 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
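
The nvmet_auth_set_key calls traced above provision the target side for each round: they push 'hmac(sha256)', the DH group name, the DHHC-1 host secret and, when one exists, the controller secret into the kernel nvmet entry for this host. A minimal sketch of such a helper follows; the configfs path and the dhchap_* attribute names are assumptions, only the hostnqn (nqn.2024-02.io.spdk:host0, from the attach calls above) and the echoed values come from this trace.

    # Hedged sketch of an nvmet_auth_set_key-style helper. The configfs layout
    # (/sys/kernel/config/nvmet/hosts/<hostnqn>/dhchap_*) is assumed, not taken
    # from this log; keys[]/ckeys[] mirror the arrays the trace iterates over.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host/dhchap_hash"       # e.g. hmac(sha256)
        echo "$dhgroup"      > "$host/dhchap_dhgroup"    # e.g. ffdhe3072
        echo "$key"          > "$host/dhchap_key"        # DHHC-1:..: host secret
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"   # bidirectional auth
    }
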
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.623 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.623 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.623 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.623 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.623 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.623 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.623 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.623 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:07.623 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.623 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:07.623 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:07.623 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:07.623 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:07.623 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:07.623 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:07.623 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:08.192 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:08.192 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: ]] 00:17:08.192 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:08.192 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:08.192 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.192 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:08.192 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:08.192 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:08.192 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.192 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:08.192 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.192 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.192 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.192 10:35:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.192 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.192 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.192 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.192 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.192 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.192 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.192 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.192 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.192 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:08.192 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.192 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.192 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.192 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.452 nvme0n1 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: ]] 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.452 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.453 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.453 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:08.453 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.453 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.453 10:35:09 
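
Condensed from the connect_authenticate trace for sha256/ffdhe4096/keyid=1 just above: the host first restricts the negotiable digests and DH groups, then attaches the controller with the host key and, because a controller key exists for this keyid, the bidirectional key as well. The rpc.py spelling below is an assumption (the test drives the same RPCs through its rpc_cmd wrapper), and key1/ckey1 are keyring names registered earlier in the test, outside this excerpt.

    # One authentication round, reconstructed as plain rpc.py calls (sketch).
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
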
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.453 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.712 nvme0n1 00:17:08.712 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.712 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.712 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.712 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.712 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.712 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.712 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.712 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.712 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.712 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.712 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.712 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.712 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:08.712 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.712 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:08.713 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:08.713 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:08.713 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:08.713 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:08.713 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:08.713 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:08.713 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:08.713 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: ]] 00:17:08.713 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:08.713 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:08.713 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.713 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:08.713 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:08.713 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:08.713 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.713 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:08.713 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.713 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.713 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.972 nvme0n1 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: ]] 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:08.972 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:09.232 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.232 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:09.232 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:09.232 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:09.232 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.232 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:09.233 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.233 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.233 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.233 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.233 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.233 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.233 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.233 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.233 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.233 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.233 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.233 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.233 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.233 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.233 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:09.233 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.233 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.233 nvme0n1 00:17:09.233 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.233 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.233 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.233 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.233 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.233 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.492 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.492 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.492 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.492 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.492 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:09.493 10:35:10 
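
Each round ends with the same check-and-teardown visible above: the attach must produce a controller named nvme0 (the repeated [[ nvme0 == \n\v\m\e\0 ]] test), after which it is detached so the next key can be tried. In rpc.py terms this amounts to roughly the following, using only the calls seen in the trace:

    # Verification and teardown between rounds (mirrors the rpc_cmd/jq calls above).
    name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]                           # controller came up => auth succeeded
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
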
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.493 nvme0n1 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.493 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.752 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.752 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.752 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.752 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.752 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
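
The host/auth.sh@58 expansion repeated above, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), adds the --dhchap-ctrlr-key flag only when a controller key exists for the key id; keyid=4 has an empty ckey, which is why the attach just above passes --dhchap-key key4 alone (unidirectional authentication). A small self-contained demonstration of the :+ idiom, with hypothetical sample values:

    # keyid 3 has a controller key, keyid 4 does not (values are illustrative).
    ckeys=([3]="ckey3" [4]="")
    for keyid in 3 4; do
        ckey_arg=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> --dhchap-key key${keyid} ${ckey_arg[*]}"
    done
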
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.752 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.752 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.752 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:09.752 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.752 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.752 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:09.752 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:09.752 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:09.752 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:09.752 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.752 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: ]] 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
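
The get_main_ns_ip sequence being traced at this point (and repeated for every round: the ip_candidates map, the -z tests, echo 10.0.0.1) resolves the address the host should connect to from the transport in use. A sketch of that selection logic follows; the name of the transport variable is an assumption, while the candidate map and the 10.0.0.1 result are from the trace.

    # Hedged sketch of get_main_ns_ip: map transport -> variable name, then
    # dereference it. Here the tcp entry points at NVMF_INITIATOR_IP=10.0.0.1.
    get_main_ns_ip() {
        local ip transport=${TEST_TRANSPORT:-tcp}     # variable name assumed
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $transport || -z ${ip_candidates[$transport]} ]] && return 1
        ip=${ip_candidates[$transport]}
        ip=${!ip}                                     # indirect expansion
        [[ -z $ip ]] && return 1
        echo "$ip"                                    # 10.0.0.1 in this run
    }
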
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.654 nvme0n1 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.654 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.913 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.913 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.913 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.913 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.913 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.913 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: ]] 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.914 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.173 nvme0n1 00:17:12.173 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.173 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.173 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.173 10:35:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.173 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.173 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.173 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.173 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.173 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.173 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.173 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.173 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.173 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:12.173 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.173 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:12.173 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:12.173 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:12.173 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:12.173 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:12.173 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:12.174 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:12.174 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:12.174 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: ]] 00:17:12.174 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:12.174 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:12.174 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.174 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:12.174 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:12.174 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:12.174 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.174 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:12.174 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.174 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.174 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.174 10:35:12 
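
The secrets echoed throughout (DHHC-1:00:…, :01:, :02:, :03:) follow the usual NVMe in-band authentication secret representation: a DHHC-1 prefix, a two-digit field identifying how the secret was (or was not) hashed, and a base64 payload, terminated by a colon; the payload carries the secret followed by a short CRC. The snippet below only splits and decodes one of the keys from this log as an illustration, it is not part of the test.

    # Illustration only: split a DHHC-1 secret from the trace and check that the
    # base64 payload decodes cleanly.
    key='DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A:'
    payload=${key#DHHC-1:??:}      # strip the "DHHC-1:<id>:" prefix
    payload=${payload%:}           # strip the trailing colon
    printf '%s' "$payload" | base64 -d | wc -c    # 36 bytes for this key
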
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.174 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:12.174 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:12.174 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:12.174 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.174 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.174 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:12.174 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.174 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:12.174 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:12.174 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:12.174 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.174 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.174 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.740 nvme0n1 00:17:12.740 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.740 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: ]] 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:12.741 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.741 
10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.001 nvme0n1 00:17:13.002 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.002 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.002 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.002 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.002 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.002 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.002 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.002 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.002 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.002 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.261 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.520 nvme0n1 00:17:13.520 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.520 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.521 10:35:14 
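
By this point the trace has walked the ffdhe3072, ffdhe4096 and ffdhe6144 rounds and is entering ffdhe8192; the host/auth.sh@101 and @102 loop markers and the @103/@104 calls show the shape of the driver. Reconstructed as a sketch (array contents beyond what this excerpt shows are assumptions):

    # Loop structure implied by the @101/@102 trace markers: every DH group is
    # exercised against every key id with the sha256 digest seen in this run.
    for dhgroup in "${dhgroups[@]}"; do        # ... ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192
        for keyid in "${!keys[@]}"; do         # 0..4
            nvmet_auth_set_key   sha256 "$dhgroup" "$keyid"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done
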
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: ]] 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.521 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.088 nvme0n1 00:17:14.088 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.088 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.088 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.088 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.088 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.347 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.347 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.347 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.347 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.347 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: ]] 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.347 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.915 nvme0n1 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: ]] 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.915 
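The trace above repeats the same connect/verify/detach pass for every digest, DH group and key index. A condensed, illustrative reconstruction of one pass follows; it is a sketch assembled from the traced commands, not the verbatim host/auth.sh source, and rpc_cmd is the autotest wrapper around scripts/rpc.py.

for keyid in "${!keys[@]}"; do
  # Program the kernel target side for this digest/dhgroup/key.
  nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
  # Restrict the SPDK host to the same digest and DH group.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # Connect with the matching host key (and controller key, when one is defined).
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a "$(get_main_ns_ip)" -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
  # Authentication succeeded only if the controller actually shows up.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
done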
10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.915 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.482 nvme0n1 00:17:15.482 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.482 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.482 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.482 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.482 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.482 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: ]] 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:15.741 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:15.742 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.742 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.742 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:15.742 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.742 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:15.742 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:15.742 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:15.742 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:15.742 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.742 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.310 nvme0n1 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.310 10:35:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.310 10:35:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.310 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.877 nvme0n1 00:17:16.877 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.877 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.877 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.877 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.877 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.877 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: ]] 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.136 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:17.137 nvme0n1 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: ]] 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.137 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.396 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.396 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.396 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.396 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.396 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.396 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.396 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.396 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.396 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.396 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.396 nvme0n1 00:17:17.396 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.396 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.396 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.396 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.396 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.396 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.396 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.396 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.396 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.396 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.396 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.396 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.396 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:17.396 
10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: ]] 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.397 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.656 nvme0n1 00:17:17.656 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.656 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.656 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: ]] 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.657 
10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.657 nvme0n1 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.657 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.916 nvme0n1 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:17.916 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: ]] 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.917 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.176 nvme0n1 00:17:18.176 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.176 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.176 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.176 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.176 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.176 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.176 
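The get_main_ns_ip fragments that recur in the trace resolve the address passed to -a from the transport type. A compact reconstruction is shown below; the TEST_TRANSPORT variable name and the indirect expansion are inferred from the traced expansions ("tcp", NVMF_INITIATOR_IP, 10.0.0.1) rather than read from nvmf/common.sh.

get_main_ns_ip() {
  local ip
  local -A ip_candidates
  ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
  ip_candidates["tcp"]=NVMF_INITIATOR_IP
  # Pick the env var name that matches the transport under test (tcp here).
  [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
  ip=${ip_candidates[$TEST_TRANSPORT]}
  # Indirect expansion: NVMF_INITIATOR_IP resolves to 10.0.0.1 in this run.
  [[ -z ${!ip} ]] && return 1
  echo "${!ip}"
}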
10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.176 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.176 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.176 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.176 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.176 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.176 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:18.176 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.176 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:18.176 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:18.176 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:18.176 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:18.176 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:18.176 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:18.176 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:18.176 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:18.176 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: ]] 00:17:18.177 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:18.177 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:17:18.177 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.177 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:18.177 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:18.177 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:18.177 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.177 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:18.177 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.177 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.177 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.177 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.177 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.177 10:35:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.177 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.177 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.177 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.177 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.177 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.177 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.177 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.177 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.177 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.177 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.177 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.436 nvme0n1 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:18.436 10:35:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: ]] 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.436 nvme0n1 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.436 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: ]] 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.695 10:35:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.695 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.696 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.696 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.696 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.696 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.696 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.696 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.696 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:18.696 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.696 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.696 nvme0n1 00:17:18.696 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.696 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.696 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.696 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.696 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.696 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.696 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.696 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.696 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.696 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:18.955 
10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
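The xtrace above repeats one cycle per dhgroup/keyid pair: host/auth.sh loads a DHHC-1 secret into the kernel nvmet target side (nvmet_auth_set_key), restricts the SPDK initiator to a single digest and DH group (bdev_nvme_set_options), connects over TCP with that key (bdev_nvme_attach_controller), checks that the controller really comes up as nvme0 (bdev_nvme_get_controllers), and detaches again. The following is a condensed sketch reconstructed from the trace, not the verbatim script; rpc_cmd, nvmet_auth_set_key and the dhgroups/keys/ckeys arrays are the test harness's own helpers and are assumed to be defined exactly as they appear in the trace, with sha384 fixed as the digest for this part of the run.

# Sketch of the loop visible in the xtrace above (not the actual host/auth.sh).
for dhgroup in "${dhgroups[@]}"; do
  for keyid in "${!keys[@]}"; do
    # Program key (and controller key, if any) for this keyid into the nvmet target.
    nvmet_auth_set_key sha384 "$dhgroup" "$keyid"

    # Allow only this digest/dhgroup on the initiator, then connect with DH-HMAC-CHAP.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"

    # Authentication succeeded only if the controller actually shows up as nvme0.
    [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
  done
done

Detaching after every attempt keeps each dhgroup/keyid combination independent, so a failed handshake in one iteration cannot mask or contaminate the next, which is why the trace shows bdev_nvme_get_controllers and bdev_nvme_detach_controller between every pair of attach calls.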
00:17:18.955 nvme0n1 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: ]] 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:18.955 10:35:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.955 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.214 nvme0n1 00:17:19.214 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.215 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.215 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.215 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.215 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.215 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.215 10:35:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: ]] 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.215 10:35:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.215 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.475 nvme0n1 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: ]] 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.475 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.734 nvme0n1 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.734 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: ]] 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:19.993 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:19.994 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.994 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.994 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:19.994 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.994 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:19.994 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:19.994 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:19.994 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:19.994 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.994 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.994 nvme0n1 00:17:19.994 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.994 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.994 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.994 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.994 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.994 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.994 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.994 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.994 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.994 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.253 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.253 nvme0n1 00:17:20.253 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.253 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.253 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.253 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.253 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.253 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: ]] 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.513 10:35:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.513 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.772 nvme0n1 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: ]] 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:20.772 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.773 10:35:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.773 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.340 nvme0n1 00:17:21.340 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.340 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.340 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.340 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.340 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.340 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.340 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.340 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.340 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.340 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.340 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.340 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.340 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:21.341 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.341 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:21.341 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:21.341 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:21.341 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:21.341 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:21.341 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:21.341 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:21.341 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:21.341 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: ]] 00:17:21.341 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:21.341 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:21.341 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.341 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:21.341 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:21.341 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:21.341 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.341 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:21.341 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.341 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.341 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.341 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.341 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:21.341 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:21.341 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:21.341 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.341 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.341 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:21.341 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.341 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:21.341 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:21.341 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:21.341 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.341 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.341 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.600 nvme0n1 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: ]] 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:21.600 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:21.601 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:21.601 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.601 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.601 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:21.601 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.601 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:21.601 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:21.601 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:21.601 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:21.601 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.601 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.199 nvme0n1 00:17:22.199 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.199 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.199 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.199 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.199 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.199 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:22.200 10:35:22 
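The host side of each pass, connect_authenticate <digest> <dhgroup> <keyid>, restricts the SPDK initiator to a single digest/DH-group pair and then attaches with the matching secret. The two RPCs below are copied from the trace (rpc_cmd is the test suite's wrapper around the SPDK RPC client); key1/ckey1 are key names registered earlier in this run, and the controller-key flag is dropped for indexes with no controller key, as seen for key index 4.

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1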
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.200 10:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.459 nvme0n1 00:17:22.459 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.459 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.459 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.459 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.459 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.459 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.459 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.459 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.459 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.459 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.459 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.459 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.459 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.459 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:22.459 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.459 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:22.459 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:22.459 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:22.459 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:22.459 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:22.459 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: ]] 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.460 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.397 nvme0n1 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: ]] 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.397 10:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.965 nvme0n1 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.965 10:35:24 
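The repeated ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line is what makes the controller key optional: when ckeys[keyid] is unset or empty the array stays empty, so later expanding it adds either both the flag and its value or nothing at all. A small self-contained illustration of the idiom (the variable names mirror the script, the demo values are made up):

    #!/usr/bin/env bash
    # Demonstrates the ${var:+...} optional-argument idiom seen in the trace.
    ckeys=( [1]="ckey-present" [4]="" )        # made-up demo values
    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> extra args: ${ckey[*]:-<none>}"
    done
    # keyid=1 -> extra args: --dhchap-ctrlr-key ckey1
    # keyid=4 -> extra args: <none>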
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: ]] 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.965 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.966 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.966 10:35:24 
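get_main_ns_ip, traced over and over in this section (and again just below), maps the transport to the environment variable holding the right address, NVMF_FIRST_TARGET_IP for rdma and NVMF_INITIATOR_IP for tcp, then prints its value, 10.0.0.1 in this run. A hedged reconstruction from the trace; the TEST_TRANSPORT variable name and the return-on-empty handling are assumptions, since those branches are never taken in this excerpt.

    # Reconstruction for illustration only; the real helper lives in nvmf/common.sh.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1                  # traced as [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                  # holds the variable *name*
        [[ -z ${!ip} ]] && return 1                           # traced as [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                         # 10.0.0.1 in this job
    }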
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.966 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:23.966 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:23.966 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:23.966 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.966 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.966 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:23.966 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.966 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:23.966 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:23.966 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:23.966 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.966 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.966 10:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.533 nvme0n1 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: ]] 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:24.533 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.533 
10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.471 nvme0n1 00:17:25.471 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.471 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.471 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.471 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.471 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.471 10:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:25.471 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:25.472 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:25.472 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.472 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.040 nvme0n1 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:26.040 10:35:26 
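At this point the trace rolls over from sha384/ffdhe8192 into sha512/ffdhe2048, which exposes the outer structure behind the 'for digest', 'for dhgroup' and 'for keyid' lines above: every digest is exercised against every DH group and every key index. A skeleton of that sweep; only sha384/sha512, ffdhe2048/ffdhe6144/ffdhe8192 and key indexes 0-4 are visible in this excerpt, so the lists here are limited to what the log actually shows, and the two helpers are the suite's own functions sketched earlier.

    for digest in sha384 sha512; do
        for dhgroup in ffdhe2048 ffdhe6144 ffdhe8192; do
            for keyid in 0 1 2 3 4; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side (suite helper)
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side (suite helper)
            done
        done
    done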
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: ]] 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.040 10:35:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.040 nvme0n1 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.040 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: ]] 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:26.301 10:35:26 
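Every successful attach in this section is verified and torn down the same way: list the controllers, confirm the name is nvme0, then detach before the next digest/dhgroup/key combination. The three commands are copied from the trace at host/auth.sh@64-65; only the wrapping into a small function is editorial.

    verify_and_detach() {
        local name
        name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
        # xtrace shows the literal comparison as [[ nvme0 == \n\v\m\e\0 ]] because
        # bash escapes the right-hand pattern when printing the trace.
        [[ $name == "nvme0" ]] || return 1
        rpc_cmd bdev_nvme_detach_controller nvme0
    }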
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.301 10:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.301 nvme0n1 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: ]] 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.301 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.302 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.302 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.302 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.302 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.302 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.302 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.302 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.560 nvme0n1 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: ]] 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.560 nvme0n1 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.560 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.820 nvme0n1 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.820 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: ]] 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.080 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:27.081 nvme0n1 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: ]] 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.081 10:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.341 nvme0n1 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:27.341 
10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: ]] 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.341 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.600 nvme0n1 00:17:27.600 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.600 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.600 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.600 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.600 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.600 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.600 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.600 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.600 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.600 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.600 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.600 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: ]] 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.601 
10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.601 nvme0n1 00:17:27.601 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.860 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.860 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.860 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.861 nvme0n1 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.861 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: ]] 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.120 nvme0n1 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.120 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.380 
10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.380 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.380 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.380 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.380 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.380 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.380 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:28.380 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.380 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:28.380 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:28.380 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:28.380 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:28.380 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:28.380 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:28.380 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:28.380 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:28.380 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: ]] 00:17:28.380 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:28.380 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:28.380 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.380 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:28.380 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:28.380 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:28.380 10:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.380 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:28.380 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.380 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.381 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.381 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.381 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:28.381 10:35:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:28.381 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:28.381 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.381 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.381 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:28.381 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.381 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:28.381 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:28.381 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:28.381 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.381 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.381 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.381 nvme0n1 00:17:28.381 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.381 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.381 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.381 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.381 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.381 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:28.639 10:35:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: ]] 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.639 nvme0n1 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.639 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.897 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.897 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.897 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.897 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.897 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.897 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.897 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.897 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:28.897 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.897 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:28.897 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:28.897 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: ]] 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.898 10:35:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.898 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.898 nvme0n1 00:17:29.156 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.156 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.156 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.156 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.156 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.156 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.156 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.156 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.156 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:29.157 
10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.157 10:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
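For reference, each pass of the auth-host trace above drives one digest/dhgroup/key combination through the same RPC sequence. A minimal sketch of that sequence, using only commands that appear in the trace (rpc_cmd is the autotest wrapper around SPDK's rpc.py; key2/ckey2 are the DH-HMAC-CHAP keys configured earlier in host/auth.sh, values elided here):

  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expected to report nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0

The trace then repeats the same steps for the remaining key IDs and for the ffdhe6144 and ffdhe8192 groups, as shown in the entries that follow.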
00:17:29.415 nvme0n1 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: ]] 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:29.415 10:35:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.415 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:29.416 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.416 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.416 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.416 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.416 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:29.416 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:29.416 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:29.416 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.416 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.416 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:29.416 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.416 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:29.416 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:29.416 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:29.416 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.416 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.416 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.706 nvme0n1 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.706 10:35:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: ]] 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.706 10:35:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.706 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.276 nvme0n1 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: ]] 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.276 10:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.604 nvme0n1 00:17:30.604 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.604 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.604 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.604 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.604 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.604 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.604 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.604 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:30.604 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.604 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.604 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.604 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.604 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:30.604 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.604 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:30.604 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:30.604 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:30.604 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:30.604 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:30.604 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:30.604 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:30.604 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:30.604 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: ]] 00:17:30.605 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:30.605 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:30.605 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.605 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:30.605 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:30.605 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:30.605 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.605 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:30.605 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.605 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.605 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.605 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.605 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:30.605 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:30.605 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:30.605 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.605 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.605 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:30.605 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.605 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:30.605 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:30.605 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:30.605 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:30.605 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.605 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.865 nvme0n1 00:17:30.865 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.865 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.865 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.865 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.865 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.125 10:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.384 nvme0n1 00:17:31.384 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.384 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.384 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.384 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.384 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.384 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.384 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.384 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.384 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.384 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.384 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.384 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.384 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.384 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5MjBlNWIyZDVhYmNkNDE5MTA2MTA3MDQ5MDYyY2Pxor+y: 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: ]] 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEzYTE2OTc4NzI4YjJlYzVkYTM5ZjYzZjQ4ZWUxZjI2OTJkYWM4YTZmN2M1YjgyNGZhYjk5YTg0MTkxNGMzMNVgPYw=: 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.385 10:35:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.385 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.324 nvme0n1 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: ]] 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.324 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.325 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.325 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:32.325 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:32.325 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:32.325 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.325 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.325 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:32.325 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.325 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:32.325 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:32.325 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:32.325 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.325 10:35:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.325 10:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.909 nvme0n1 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: ]] 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.909 10:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.496 nvme0n1 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNTI4ZjU2NzdiODRkZjFmYjNhODk2ZTljYmYxMGViZjY5MmU4NTNjNzVjMmUyMMoqew==: 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: ]] 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzUyZTQ4YjA0MDA5OWVjZmEzY2UwMWZkODBmODgxNzdS0yi4: 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.496 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.062 nvme0n1 00:17:34.062 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.062 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.062 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.062 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.062 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.062 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIxZGJjODUxNjI3YWI0N2I1MjVhMGNmODhkMzIxOTVhOGZiMDRlYmZkNjJiMzUwY2JmMWQ1MWU5NjcwMzY1OHEnhLs=: 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:34.321 10:35:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.321 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:34.322 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:34.322 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:34.322 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.322 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.322 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:34.322 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.322 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:34.322 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:34.322 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:34.322 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:34.322 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.322 10:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.891 nvme0n1 00:17:34.891 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.891 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.891 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.891 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.891 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.891 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.891 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.891 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.891 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: ]] 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.892 request: 00:17:34.892 { 00:17:34.892 "name": "nvme0", 00:17:34.892 "trtype": "tcp", 00:17:34.892 "traddr": "10.0.0.1", 00:17:34.892 "adrfam": "ipv4", 00:17:34.892 "trsvcid": "4420", 00:17:34.892 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:34.892 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:34.892 "prchk_reftag": false, 00:17:34.892 "prchk_guard": false, 00:17:34.892 "hdgst": false, 00:17:34.892 "ddgst": false, 00:17:34.892 "allow_unrecognized_csi": false, 00:17:34.892 "method": "bdev_nvme_attach_controller", 00:17:34.892 "req_id": 1 00:17:34.892 } 00:17:34.892 Got JSON-RPC error response 00:17:34.892 response: 00:17:34.892 { 00:17:34.892 "code": -5, 00:17:34.892 "message": "Input/output error" 00:17:34.892 } 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.892 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.153 request: 00:17:35.153 { 00:17:35.153 "name": "nvme0", 00:17:35.153 "trtype": "tcp", 00:17:35.153 "traddr": "10.0.0.1", 00:17:35.153 "adrfam": "ipv4", 00:17:35.153 "trsvcid": "4420", 00:17:35.153 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:35.153 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:35.153 "prchk_reftag": false, 00:17:35.153 "prchk_guard": false, 00:17:35.153 "hdgst": false, 00:17:35.153 "ddgst": false, 00:17:35.153 "dhchap_key": "key2", 00:17:35.153 "allow_unrecognized_csi": false, 00:17:35.153 "method": "bdev_nvme_attach_controller", 00:17:35.153 "req_id": 1 00:17:35.153 } 00:17:35.153 Got JSON-RPC error response 00:17:35.153 response: 00:17:35.153 { 00:17:35.153 "code": -5, 00:17:35.153 "message": "Input/output error" 00:17:35.153 } 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:35.153 10:35:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.153 request: 00:17:35.153 { 00:17:35.153 "name": "nvme0", 00:17:35.153 "trtype": "tcp", 00:17:35.153 "traddr": "10.0.0.1", 00:17:35.153 "adrfam": "ipv4", 00:17:35.153 "trsvcid": "4420", 
00:17:35.153 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:35.153 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:35.153 "prchk_reftag": false, 00:17:35.153 "prchk_guard": false, 00:17:35.153 "hdgst": false, 00:17:35.153 "ddgst": false, 00:17:35.153 "dhchap_key": "key1", 00:17:35.153 "dhchap_ctrlr_key": "ckey2", 00:17:35.153 "allow_unrecognized_csi": false, 00:17:35.153 "method": "bdev_nvme_attach_controller", 00:17:35.153 "req_id": 1 00:17:35.153 } 00:17:35.153 Got JSON-RPC error response 00:17:35.153 response: 00:17:35.153 { 00:17:35.153 "code": -5, 00:17:35.153 "message": "Input/output error" 00:17:35.153 } 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.153 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.154 nvme0n1 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: ]] 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.154 10:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:17:35.413 10:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.413 10:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.413 10:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:35.413 10:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:35.413 10:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:35.413 10:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:35.413 10:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:35.413 10:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:35.413 10:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:35.413 10:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:35.413 10:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.413 10:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.413 request: 00:17:35.413 { 00:17:35.413 "name": "nvme0", 00:17:35.413 "dhchap_key": "key1", 00:17:35.413 "dhchap_ctrlr_key": "ckey2", 00:17:35.413 "method": "bdev_nvme_set_keys", 00:17:35.413 "req_id": 1 00:17:35.413 } 00:17:35.413 Got JSON-RPC error response 00:17:35.413 response: 00:17:35.413 
{ 00:17:35.413 "code": -13, 00:17:35.413 "message": "Permission denied" 00:17:35.413 } 00:17:35.413 10:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:35.413 10:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:35.413 10:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:35.413 10:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:35.413 10:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:35.413 10:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.413 10:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:35.413 10:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.413 10:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.413 10:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.413 10:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:17:35.413 10:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:17:36.349 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.349 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:36.349 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.349 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.349 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.349 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:17:36.349 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:36.349 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.349 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:36.349 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:36.349 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:36.349 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:36.349 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:36.349 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:36.349 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:36.349 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkOGI4ODgzNTExNzBhOTc4MDAxMjE5NGI2YjJiZWI1YjQwMTcyNDhjZDA5M2FkOiUuSw==: 00:17:36.349 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: ]] 00:17:36.349 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTMzYzg5OGQyMTk2MDBkZjg2YzAzM2I1YzIyMmU4NzI3MzlkYTcwZTExNzYzMGRh1cPc5Q==: 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.609 nvme0n1 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5M2NiNzA4NmM1ZmFjMDY3ZWY4ZGNlYjljNjlkOWXxK34A: 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: ]] 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWE1ODQxZDk5MWY0NDJjZTllMWUzZWFiNjY3YzU2NzjSXE94: 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.609 request: 00:17:36.609 { 00:17:36.609 "name": "nvme0", 00:17:36.609 "dhchap_key": "key2", 00:17:36.609 "dhchap_ctrlr_key": "ckey1", 00:17:36.609 "method": "bdev_nvme_set_keys", 00:17:36.609 "req_id": 1 00:17:36.609 } 00:17:36.609 Got JSON-RPC error response 00:17:36.609 response: 00:17:36.609 { 00:17:36.609 "code": -13, 00:17:36.609 "message": "Permission denied" 00:17:36.609 } 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:17:36.609 10:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:17:37.986 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:37.986 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.986 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:37.987 rmmod nvme_tcp 00:17:37.987 rmmod nvme_fabrics 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78470 ']' 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78470 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 78470 ']' 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 78470 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78470 00:17:37.987 killing process with pid 78470 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78470' 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 78470 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 78470 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:37.987 10:35:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:37.987 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:38.246 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:38.246 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:38.246 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:38.246 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:38.246 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:38.246 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.246 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:38.246 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.246 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:17:38.246 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:38.246 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:38.246 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:38.246 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:38.246 10:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:17:38.246 10:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:38.246 10:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:38.247 10:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:38.247 10:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:38.247 10:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:17:38.247 10:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:17:38.247 10:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:38.824 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:39.082 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
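
Editor's note: for readers following the cleanup above, once the host-side controller is gone the script tears down the kernel nvmet target it had built through configfs and unloads the target modules. A condensed sketch of that sequence, using the same NQNs and port number that appear in the trace (the rm/rmdir/modprobe calls below mirror the ones in the log; this needs root and a target configured the same way):

# Revoke the host's access to the subsystem, then drop the host entry itself
rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

# Unlink the subsystem from port 1, then remove namespace, port and subsystem
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

# Finally unload the kernel target modules
modprobe -r nvmet_tcp nvmet
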
00:17:39.082 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:39.082 10:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Bgj /tmp/spdk.key-null.gzz /tmp/spdk.key-sha256.gLC /tmp/spdk.key-sha384.dDq /tmp/spdk.key-sha512.UEQ /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:39.082 10:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:39.341 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:39.600 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:39.600 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:39.600 00:17:39.600 real 0m37.891s 00:17:39.600 user 0m34.303s 00:17:39.600 sys 0m3.935s 00:17:39.600 10:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:39.600 ************************************ 00:17:39.600 END TEST nvmf_auth_host 00:17:39.600 ************************************ 00:17:39.600 10:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.600 10:35:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:17:39.600 10:35:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:39.600 10:35:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:39.600 10:35:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:39.600 10:35:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.600 ************************************ 00:17:39.600 START TEST nvmf_digest 00:17:39.600 ************************************ 00:17:39.600 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:39.600 * Looking for test storage... 
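
Editor's note: the nvmf_auth_host run that ends above exercises SPDK's host-side DH-HMAC-CHAP paths over JSON-RPC. In the trace, connect attempts without a key or with a non-matching key fail with code -5 (Input/output error), re-keying a live controller with a mismatched pair is rejected with -13 (Permission denied), and a controller whose keys no longer match the target is dropped after the 1-second controller-loss timeout. A condensed sketch of the happy path, assuming rpc_cmd forwards to scripts/rpc.py and that the DHHC-1 secrets were already registered as keyring entries named key1/ckey1, key2/ckey2 earlier in the script (that setup is outside this excerpt):

# Restrict the host to one digest/DH-group combination for the handshake
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Connect with DH-HMAC-CHAP: key1 authenticates the host, ckey1 the controller
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1 \
    --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1

# Rotate to a new key pair on the live controller (the target must already accept key2/ckey2)
rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Tear the connection down again
rpc_cmd bdev_nvme_detach_controller nvme0
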
00:17:39.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:39.600 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:39.600 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:17:39.600 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:39.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.860 --rc genhtml_branch_coverage=1 00:17:39.860 --rc genhtml_function_coverage=1 00:17:39.860 --rc genhtml_legend=1 00:17:39.860 --rc geninfo_all_blocks=1 00:17:39.860 --rc geninfo_unexecuted_blocks=1 00:17:39.860 00:17:39.860 ' 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:39.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.860 --rc genhtml_branch_coverage=1 00:17:39.860 --rc genhtml_function_coverage=1 00:17:39.860 --rc genhtml_legend=1 00:17:39.860 --rc geninfo_all_blocks=1 00:17:39.860 --rc geninfo_unexecuted_blocks=1 00:17:39.860 00:17:39.860 ' 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:39.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.860 --rc genhtml_branch_coverage=1 00:17:39.860 --rc genhtml_function_coverage=1 00:17:39.860 --rc genhtml_legend=1 00:17:39.860 --rc geninfo_all_blocks=1 00:17:39.860 --rc geninfo_unexecuted_blocks=1 00:17:39.860 00:17:39.860 ' 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:39.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.860 --rc genhtml_branch_coverage=1 00:17:39.860 --rc genhtml_function_coverage=1 00:17:39.860 --rc genhtml_legend=1 00:17:39.860 --rc geninfo_all_blocks=1 00:17:39.860 --rc geninfo_unexecuted_blocks=1 00:17:39.860 00:17:39.860 ' 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.860 10:35:40 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.860 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:39.861 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:39.861 Cannot find device "nvmf_init_br" 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:39.861 Cannot find device "nvmf_init_br2" 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:39.861 Cannot find device "nvmf_tgt_br" 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:17:39.861 Cannot find device "nvmf_tgt_br2" 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:39.861 Cannot find device "nvmf_init_br" 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:39.861 Cannot find device "nvmf_init_br2" 00:17:39.861 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:17:39.862 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:39.862 Cannot find device "nvmf_tgt_br" 00:17:39.862 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:17:39.862 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:39.862 Cannot find device "nvmf_tgt_br2" 00:17:39.862 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:17:39.862 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:39.862 Cannot find device "nvmf_br" 00:17:39.862 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:17:39.862 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:39.862 Cannot find device "nvmf_init_if" 00:17:39.862 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:17:39.862 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:39.862 Cannot find device "nvmf_init_if2" 00:17:39.862 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:17:39.862 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:39.862 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:39.862 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:17:39.862 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:39.862 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:39.862 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:17:39.862 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:39.862 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:39.862 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:39.862 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:39.862 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:40.121 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:40.121 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:40.121 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:40.121 10:35:40 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:40.121 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:40.121 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:40.121 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:40.121 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:40.121 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:40.121 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:40.121 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:40.121 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:40.121 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:40.121 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:40.121 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:40.121 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:40.121 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:40.121 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:40.121 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:40.121 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:40.121 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:40.121 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:40.121 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:40.122 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:40.122 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:17:40.122 00:17:40.122 --- 10.0.0.3 ping statistics --- 00:17:40.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.122 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:40.122 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:40.122 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms 00:17:40.122 00:17:40.122 --- 10.0.0.4 ping statistics --- 00:17:40.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.122 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:40.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:40.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:17:40.122 00:17:40.122 --- 10.0.0.1 ping statistics --- 00:17:40.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.122 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:40.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:40.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:17:40.122 00:17:40.122 --- 10.0.0.2 ping statistics --- 00:17:40.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.122 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:40.122 ************************************ 00:17:40.122 START TEST nvmf_digest_clean 00:17:40.122 ************************************ 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
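The nvmf_veth_init block traced above builds the test network: two initiator veths on the host (10.0.0.1 and 10.0.0.2), two target veths inside the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), all peer ends enslaved to the nvmf_br bridge, TCP port 4420 opened in iptables, and reachability confirmed with single pings in both directions. A minimal stand-alone sketch of the same topology, using only the commands shown in the trace (run as root; the comment markers added by the ipts wrapper are omitted):

  # Recreate the nvmf test topology by hand (names/addresses as in the trace above).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                    # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target namespace -> host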
00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=80124 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 80124 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 80124 ']' 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:40.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:40.122 10:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 [2024-11-15 10:35:41.029114] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:17:40.381 [2024-11-15 10:35:41.029212] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.381 [2024-11-15 10:35:41.183891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.639 [2024-11-15 10:35:41.253110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.639 [2024-11-15 10:35:41.253186] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.639 [2024-11-15 10:35:41.253201] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.639 [2024-11-15 10:35:41.253211] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.639 [2024-11-15 10:35:41.253221] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:40.640 [2024-11-15 10:35:41.253688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.208 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:41.208 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:17:41.208 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:41.208 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:41.208 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:41.208 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.208 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:41.208 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:41.208 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:41.208 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.208 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:41.467 [2024-11-15 10:35:42.116287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:41.467 null0 00:17:41.467 [2024-11-15 10:35:42.172844] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.467 [2024-11-15 10:35:42.197083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:41.467 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.467 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:41.467 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:41.467 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:41.467 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:41.467 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:41.467 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:41.467 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:41.467 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80156 00:17:41.467 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:41.467 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80156 /var/tmp/bperf.sock 00:17:41.467 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 80156 ']' 00:17:41.467 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:17:41.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:41.467 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:41.467 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:41.467 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:41.467 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:41.467 [2024-11-15 10:35:42.282123] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:17:41.467 [2024-11-15 10:35:42.282286] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80156 ] 00:17:41.726 [2024-11-15 10:35:42.439095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.726 [2024-11-15 10:35:42.501930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.670 10:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:42.670 10:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:17:42.670 10:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:42.670 10:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:42.670 10:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:42.942 [2024-11-15 10:35:43.663341] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:42.942 10:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:42.942 10:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:43.201 nvme0n1 00:17:43.201 10:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:43.201 10:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:43.459 Running I/O for 2 seconds... 
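Each digest pass above follows the same drive-by-RPC pattern: bdevperf is started paused (--wait-for-rpc) on its own socket, initialization is completed over that socket, an NVMe-oF/TCP controller is attached with data digest enabled (--ddgst), and the workload defined on the command line is kicked off with perform_tests. A sketch of one pass, with the paths used in this workspace and the 4 KiB / qd 128 randread parameters of the run above:

  SPDK=/home/vagrant/spdk_repo/spdk
  BPERF_SOCK=/var/tmp/bperf.sock
  # start bdevperf paused, pinned to core 1 (-m 2), with the workload parameters
  "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  bperfpid=$!
  # (the harness waits for $BPERF_SOCK to appear before issuing any RPCs)
  # finish framework init; crc32c stays on the software accel module in this run
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" framework_start_init
  # attach the NVMe-oF/TCP controller with data digest enabled
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # run the 2-second workload and print the per-job summary
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests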
00:17:45.332 14478.00 IOPS, 56.55 MiB/s [2024-11-15T10:35:46.444Z] 14605.00 IOPS, 57.05 MiB/s 00:17:45.591 Latency(us) 00:17:45.591 [2024-11-15T10:35:46.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.591 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:45.591 nvme0n1 : 2.01 14637.14 57.18 0.00 0.00 8738.14 8340.95 25022.84 00:17:45.591 [2024-11-15T10:35:46.444Z] =================================================================================================================== 00:17:45.591 [2024-11-15T10:35:46.444Z] Total : 14637.14 57.18 0.00 0.00 8738.14 8340.95 25022.84 00:17:45.591 { 00:17:45.591 "results": [ 00:17:45.591 { 00:17:45.591 "job": "nvme0n1", 00:17:45.591 "core_mask": "0x2", 00:17:45.591 "workload": "randread", 00:17:45.591 "status": "finished", 00:17:45.591 "queue_depth": 128, 00:17:45.591 "io_size": 4096, 00:17:45.591 "runtime": 2.01303, 00:17:45.591 "iops": 14637.139039159874, 00:17:45.591 "mibps": 57.17632437171826, 00:17:45.591 "io_failed": 0, 00:17:45.591 "io_timeout": 0, 00:17:45.591 "avg_latency_us": 8738.143780324885, 00:17:45.591 "min_latency_us": 8340.945454545454, 00:17:45.591 "max_latency_us": 25022.836363636365 00:17:45.591 } 00:17:45.591 ], 00:17:45.591 "core_count": 1 00:17:45.591 } 00:17:45.591 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:45.591 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:45.591 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:45.591 | select(.opcode=="crc32c") 00:17:45.591 | "\(.module_name) \(.executed)"' 00:17:45.591 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:45.591 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80156 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 80156 ']' 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 80156 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80156 00:17:45.920 killing process with pid 80156 00:17:45.920 Received shutdown signal, test time was about 2.000000 seconds 00:17:45.920 00:17:45.920 Latency(us) 00:17:45.920 [2024-11-15T10:35:46.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:45.920 [2024-11-15T10:35:46.773Z] =================================================================================================================== 00:17:45.920 [2024-11-15T10:35:46.773Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80156' 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 80156 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 80156 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80218 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80218 /var/tmp/bperf.sock 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 80218 ']' 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:45.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:45.920 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:46.179 [2024-11-15 10:35:46.811878] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:17:46.179 [2024-11-15 10:35:46.812255] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80218 ] 00:17:46.179 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:46.179 Zero copy mechanism will not be used. 00:17:46.179 [2024-11-15 10:35:46.957706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.179 [2024-11-15 10:35:47.021028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.205 10:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:47.205 10:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:17:47.205 10:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:47.205 10:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:47.205 10:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:47.464 [2024-11-15 10:35:48.209615] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:47.464 10:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:47.464 10:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:48.031 nvme0n1 00:17:48.031 10:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:48.031 10:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:48.031 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:48.031 Zero copy mechanism will not be used. 00:17:48.031 Running I/O for 2 seconds... 
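After each timed run, the script verifies that the digest work actually went through the expected accel module: it pulls accel statistics over the same bperf socket and checks that the crc32c opcode was executed by the "software" module (DSA is disabled in this run, scan_dsa=false). A sketch of that check, using the exact jq filter from the trace:

  SPDK=/home/vagrant/spdk_repo/spdk
  read -r acc_module acc_executed < <(
      "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  # pass if crc32c ran at least once and was handled by the expected module
  [[ $acc_executed -gt 0 && $acc_module == software ]] && echo "digest verified in $acc_module"

The "zero copy threshold (65536)" lines printed before the 128 KiB passes are informational only: with a 131072-byte I/O size the tool falls back to regular (copying) buffers instead of its zero-copy path.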
00:17:50.346 7376.00 IOPS, 922.00 MiB/s [2024-11-15T10:35:51.199Z] 7448.00 IOPS, 931.00 MiB/s 00:17:50.346 Latency(us) 00:17:50.346 [2024-11-15T10:35:51.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.346 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:50.346 nvme0n1 : 2.00 7449.44 931.18 0.00 0.00 2144.29 1906.50 9830.40 00:17:50.346 [2024-11-15T10:35:51.199Z] =================================================================================================================== 00:17:50.346 [2024-11-15T10:35:51.199Z] Total : 7449.44 931.18 0.00 0.00 2144.29 1906.50 9830.40 00:17:50.346 { 00:17:50.346 "results": [ 00:17:50.346 { 00:17:50.346 "job": "nvme0n1", 00:17:50.346 "core_mask": "0x2", 00:17:50.346 "workload": "randread", 00:17:50.346 "status": "finished", 00:17:50.346 "queue_depth": 16, 00:17:50.346 "io_size": 131072, 00:17:50.346 "runtime": 2.00391, 00:17:50.346 "iops": 7449.436351931973, 00:17:50.346 "mibps": 931.1795439914966, 00:17:50.346 "io_failed": 0, 00:17:50.346 "io_timeout": 0, 00:17:50.346 "avg_latency_us": 2144.2947130468674, 00:17:50.346 "min_latency_us": 1906.5018181818182, 00:17:50.346 "max_latency_us": 9830.4 00:17:50.346 } 00:17:50.346 ], 00:17:50.346 "core_count": 1 00:17:50.346 } 00:17:50.346 10:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:50.346 10:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:50.346 10:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:50.346 10:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:50.346 | select(.opcode=="crc32c") 00:17:50.346 | "\(.module_name) \(.executed)"' 00:17:50.346 10:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:50.346 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:50.346 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:50.346 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:50.346 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:50.346 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80218 00:17:50.346 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 80218 ']' 00:17:50.346 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 80218 00:17:50.346 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:17:50.346 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:50.346 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80218 00:17:50.346 killing process with pid 80218 00:17:50.346 Received shutdown signal, test time was about 2.000000 seconds 00:17:50.346 00:17:50.346 Latency(us) 00:17:50.346 [2024-11-15T10:35:51.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.346 
[2024-11-15T10:35:51.199Z] =================================================================================================================== 00:17:50.346 [2024-11-15T10:35:51.199Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:50.346 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:50.346 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:50.346 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80218' 00:17:50.346 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 80218 00:17:50.346 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 80218 00:17:50.606 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:50.606 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:50.606 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:50.606 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:50.606 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:50.606 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:50.606 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:50.606 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80279 00:17:50.606 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:50.606 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80279 /var/tmp/bperf.sock 00:17:50.606 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 80279 ']' 00:17:50.606 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:50.606 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:50.606 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:50.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:50.606 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:50.606 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:50.606 [2024-11-15 10:35:51.400940] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:17:50.606 [2024-11-15 10:35:51.401363] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80279 ] 00:17:50.865 [2024-11-15 10:35:51.544018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.865 [2024-11-15 10:35:51.603800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.865 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:50.865 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:17:50.865 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:50.865 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:50.865 10:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:51.432 [2024-11-15 10:35:52.019347] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:51.432 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:51.432 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:51.690 nvme0n1 00:17:51.690 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:51.690 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:51.690 Running I/O for 2 seconds... 
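The per-job summaries above are easy to sanity-check by hand. Throughput is IOPS times the I/O size: for the first randread pass, 14637.14 IOPS x 4096 B is about 57.2 MiB/s, matching the reported 57.18 MiB/s, and for the 128 KiB pass, 7449.44 IOPS x 131072 B is about 931 MiB/s. Average latency follows Little's law, queue depth divided by IOPS: 128 / 14637 s is roughly 8.7 ms against the reported 8738 us, and 16 / 7449 s is roughly 2.1 ms against the reported 2144 us.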
00:17:54.001 15622.00 IOPS, 61.02 MiB/s [2024-11-15T10:35:54.854Z] 15685.00 IOPS, 61.27 MiB/s 00:17:54.001 Latency(us) 00:17:54.001 [2024-11-15T10:35:54.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.001 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:54.001 nvme0n1 : 2.01 15673.68 61.23 0.00 0.00 8159.71 7864.32 16443.58 00:17:54.001 [2024-11-15T10:35:54.854Z] =================================================================================================================== 00:17:54.001 [2024-11-15T10:35:54.854Z] Total : 15673.68 61.23 0.00 0.00 8159.71 7864.32 16443.58 00:17:54.001 { 00:17:54.001 "results": [ 00:17:54.001 { 00:17:54.001 "job": "nvme0n1", 00:17:54.001 "core_mask": "0x2", 00:17:54.001 "workload": "randwrite", 00:17:54.001 "status": "finished", 00:17:54.001 "queue_depth": 128, 00:17:54.001 "io_size": 4096, 00:17:54.001 "runtime": 2.009611, 00:17:54.001 "iops": 15673.680130134639, 00:17:54.001 "mibps": 61.22531300833843, 00:17:54.001 "io_failed": 0, 00:17:54.001 "io_timeout": 0, 00:17:54.001 "avg_latency_us": 8159.7084665693055, 00:17:54.001 "min_latency_us": 7864.32, 00:17:54.001 "max_latency_us": 16443.578181818182 00:17:54.001 } 00:17:54.001 ], 00:17:54.001 "core_count": 1 00:17:54.001 } 00:17:54.001 10:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:54.001 10:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:54.001 10:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:54.001 10:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:54.001 | select(.opcode=="crc32c") 00:17:54.001 | "\(.module_name) \(.executed)"' 00:17:54.001 10:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:54.259 10:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:54.259 10:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:54.259 10:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:54.259 10:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:54.259 10:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80279 00:17:54.259 10:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 80279 ']' 00:17:54.259 10:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 80279 00:17:54.259 10:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:17:54.259 10:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:54.259 10:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80279 00:17:54.259 killing process with pid 80279 00:17:54.259 Received shutdown signal, test time was about 2.000000 seconds 00:17:54.259 00:17:54.259 Latency(us) 00:17:54.259 [2024-11-15T10:35:55.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.259 
[2024-11-15T10:35:55.112Z] =================================================================================================================== 00:17:54.259 [2024-11-15T10:35:55.112Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:54.259 10:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:54.259 10:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:54.259 10:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80279' 00:17:54.259 10:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 80279 00:17:54.259 10:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 80279 00:17:54.518 10:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:54.518 10:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:54.518 10:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:54.518 10:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:54.518 10:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:54.518 10:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:54.518 10:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:54.518 10:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80332 00:17:54.518 10:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80332 /var/tmp/bperf.sock 00:17:54.518 10:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:54.518 10:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 80332 ']' 00:17:54.518 10:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:54.518 10:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:54.518 10:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:54.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:54.518 10:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:54.518 10:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:54.518 [2024-11-15 10:35:55.197005] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:17:54.518 [2024-11-15 10:35:55.197436] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80332 ] 00:17:54.518 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:54.518 Zero copy mechanism will not be used. 00:17:54.518 [2024-11-15 10:35:55.346618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.777 [2024-11-15 10:35:55.410364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.376 10:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:55.376 10:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:17:55.376 10:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:55.376 10:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:55.376 10:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:55.944 [2024-11-15 10:35:56.487301] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:55.944 10:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:55.944 10:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:56.204 nvme0n1 00:17:56.204 10:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:56.204 10:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:56.204 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:56.204 Zero copy mechanism will not be used. 00:17:56.204 Running I/O for 2 seconds... 
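Each pass ends the same way: the harness reads back the accel stats, then killprocess tears down the bdevperf instance, which prints the "Received shutdown signal" banner and an all-zero latency table because the timed run had already completed. A simplified, Linux-only sketch of the killprocess helper as traced above (the real helper also branches on uname and treats sudo wrappers and stale pids more carefully):

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                     # is the process still alive?
      local name
      name=$(ps --no-headers -o comm= "$pid")        # e.g. reactor_1 for bdevperf
      [[ $name != sudo ]] || return 1                # never signal a sudo wrapper directly
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true                # reap the child; ignore its exit code
  }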
00:17:58.518 6416.00 IOPS, 802.00 MiB/s [2024-11-15T10:35:59.371Z] 6422.50 IOPS, 802.81 MiB/s 00:17:58.518 Latency(us) 00:17:58.518 [2024-11-15T10:35:59.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.518 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:58.518 nvme0n1 : 2.00 6418.89 802.36 0.00 0.00 2486.77 2159.71 6732.33 00:17:58.518 [2024-11-15T10:35:59.371Z] =================================================================================================================== 00:17:58.518 [2024-11-15T10:35:59.371Z] Total : 6418.89 802.36 0.00 0.00 2486.77 2159.71 6732.33 00:17:58.518 { 00:17:58.518 "results": [ 00:17:58.518 { 00:17:58.518 "job": "nvme0n1", 00:17:58.518 "core_mask": "0x2", 00:17:58.518 "workload": "randwrite", 00:17:58.518 "status": "finished", 00:17:58.518 "queue_depth": 16, 00:17:58.518 "io_size": 131072, 00:17:58.518 "runtime": 2.003461, 00:17:58.518 "iops": 6418.892107208476, 00:17:58.518 "mibps": 802.3615134010595, 00:17:58.518 "io_failed": 0, 00:17:58.518 "io_timeout": 0, 00:17:58.518 "avg_latency_us": 2486.7694500212074, 00:17:58.518 "min_latency_us": 2159.7090909090907, 00:17:58.518 "max_latency_us": 6732.334545454545 00:17:58.518 } 00:17:58.518 ], 00:17:58.518 "core_count": 1 00:17:58.518 } 00:17:58.518 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:58.518 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:58.518 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:58.518 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:58.518 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:58.518 | select(.opcode=="crc32c") 00:17:58.518 | "\(.module_name) \(.executed)"' 00:17:58.518 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:58.518 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:58.518 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:58.518 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:58.518 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80332 00:17:58.518 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 80332 ']' 00:17:58.518 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 80332 00:17:58.518 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:17:58.518 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:58.518 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80332 00:17:58.777 killing process with pid 80332 00:17:58.777 Received shutdown signal, test time was about 2.000000 seconds 00:17:58.777 00:17:58.777 Latency(us) 00:17:58.777 [2024-11-15T10:35:59.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:58.777 [2024-11-15T10:35:59.630Z] =================================================================================================================== 00:17:58.777 [2024-11-15T10:35:59.630Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:58.777 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:58.777 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:58.778 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80332' 00:17:58.778 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 80332 00:17:58.778 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 80332 00:17:58.778 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80124 00:17:58.778 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 80124 ']' 00:17:58.778 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 80124 00:17:58.778 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:17:58.778 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:58.778 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80124 00:17:58.778 killing process with pid 80124 00:17:58.778 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:58.778 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:58.778 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80124' 00:17:58.778 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 80124 00:17:58.778 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 80124 00:17:59.037 ************************************ 00:17:59.037 END TEST nvmf_digest_clean 00:17:59.037 ************************************ 00:17:59.037 00:17:59.037 real 0m18.859s 00:17:59.037 user 0m37.337s 00:17:59.037 sys 0m4.645s 00:17:59.037 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:59.037 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:59.037 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:17:59.037 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:17:59.037 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:59.037 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:59.037 ************************************ 00:17:59.037 START TEST nvmf_digest_error 00:17:59.037 ************************************ 00:17:59.037 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error 00:17:59.037 10:35:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:17:59.037 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:59.037 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:59.037 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:59.037 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80421 00:17:59.037 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:59.037 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80421 00:17:59.037 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80421 ']' 00:17:59.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.037 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.037 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:59.037 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.037 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:59.037 10:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:59.366 [2024-11-15 10:35:59.941020] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:17:59.366 [2024-11-15 10:35:59.941149] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.366 [2024-11-15 10:36:00.085870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.366 [2024-11-15 10:36:00.145315] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.366 [2024-11-15 10:36:00.145366] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.366 [2024-11-15 10:36:00.145378] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.366 [2024-11-15 10:36:00.145387] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.366 [2024-11-15 10:36:00.145394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:59.366 [2024-11-15 10:36:00.145827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.328 10:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:00.328 10:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:18:00.328 10:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:00.328 10:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:00.328 10:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:00.328 10:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.328 10:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:18:00.328 10:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.328 10:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:00.328 [2024-11-15 10:36:00.950364] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:18:00.328 10:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.328 10:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:18:00.328 10:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:18:00.328 10:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.328 10:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:00.328 [2024-11-15 10:36:01.012281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:00.328 null0 00:18:00.328 [2024-11-15 10:36:01.065677] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.328 [2024-11-15 10:36:01.089781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:00.328 10:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.328 10:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:18:00.328 10:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:00.328 10:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:00.328 10:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:00.328 10:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:00.328 10:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80453 00:18:00.328 10:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:18:00.328 10:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80453 /var/tmp/bperf.sock 00:18:00.328 10:36:01 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80453 ']' 00:18:00.328 10:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:00.328 10:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:00.328 10:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:00.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:00.328 10:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:00.328 10:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:00.328 [2024-11-15 10:36:01.155319] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:18:00.328 [2024-11-15 10:36:01.155667] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80453 ] 00:18:00.587 [2024-11-15 10:36:01.306473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.587 [2024-11-15 10:36:01.369109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.587 [2024-11-15 10:36:01.425843] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:00.846 10:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:00.846 10:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:18:00.846 10:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:00.846 10:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:01.104 10:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:01.104 10:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.104 10:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:01.104 10:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.104 10:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:01.104 10:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:01.363 nvme0n1 00:18:01.363 10:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:01.363 10:36:02 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.363 10:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:01.363 10:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.363 10:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:01.363 10:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:01.622 Running I/O for 2 seconds... 00:18:01.622 [2024-11-15 10:36:02.291762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:01.622 [2024-11-15 10:36:02.292020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.622 [2024-11-15 10:36:02.292041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.622 [2024-11-15 10:36:02.309286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:01.622 [2024-11-15 10:36:02.309342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.622 [2024-11-15 10:36:02.309357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.622 [2024-11-15 10:36:02.326472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:01.622 [2024-11-15 10:36:02.326656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.622 [2024-11-15 10:36:02.326674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.622 [2024-11-15 10:36:02.343782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:01.622 [2024-11-15 10:36:02.343835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.622 [2024-11-15 10:36:02.343849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.622 [2024-11-15 10:36:02.360888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:01.622 [2024-11-15 10:36:02.360931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.622 [2024-11-15 10:36:02.360944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.622 [2024-11-15 10:36:02.377961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:01.622 [2024-11-15 10:36:02.378003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8575 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.622 [2024-11-15 10:36:02.378017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.622 [2024-11-15 10:36:02.395010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:01.622 [2024-11-15 10:36:02.395065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.622 [2024-11-15 10:36:02.395081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.622 [2024-11-15 10:36:02.412087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:01.622 [2024-11-15 10:36:02.412259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.622 [2024-11-15 10:36:02.412278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.622 [2024-11-15 10:36:02.429318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:01.622 [2024-11-15 10:36:02.429359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.622 [2024-11-15 10:36:02.429373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.622 [2024-11-15 10:36:02.446332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:01.622 [2024-11-15 10:36:02.446498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.622 [2024-11-15 10:36:02.446516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.622 [2024-11-15 10:36:02.463524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:01.622 [2024-11-15 10:36:02.463566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.622 [2024-11-15 10:36:02.463580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.881 [2024-11-15 10:36:02.480732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:01.881 [2024-11-15 10:36:02.480904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.881 [2024-11-15 10:36:02.480923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.881 [2024-11-15 10:36:02.498015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:01.881 [2024-11-15 10:36:02.498073] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.881 [2024-11-15 10:36:02.498088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.881 [2024-11-15 10:36:02.515202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:01.881 [2024-11-15 10:36:02.515395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.881 [2024-11-15 10:36:02.515423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.881 [2024-11-15 10:36:02.532572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:01.881 [2024-11-15 10:36:02.532615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.881 [2024-11-15 10:36:02.532629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.881 [2024-11-15 10:36:02.549706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:01.881 [2024-11-15 10:36:02.549750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.881 [2024-11-15 10:36:02.549765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.881 [2024-11-15 10:36:02.566865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:01.881 [2024-11-15 10:36:02.566917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.881 [2024-11-15 10:36:02.566931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.881 [2024-11-15 10:36:02.584156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:01.881 [2024-11-15 10:36:02.584203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.881 [2024-11-15 10:36:02.584218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.881 [2024-11-15 10:36:02.602472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:01.881 [2024-11-15 10:36:02.602512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.881 [2024-11-15 10:36:02.602526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.881 [2024-11-15 10:36:02.619586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:01.881 [2024-11-15 10:36:02.619626] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.881 [2024-11-15 10:36:02.619640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.881 [2024-11-15 10:36:02.640408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:01.881 [2024-11-15 10:36:02.640507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.881 [2024-11-15 10:36:02.640539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.881 [2024-11-15 10:36:02.660192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:01.881 [2024-11-15 10:36:02.660282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.881 [2024-11-15 10:36:02.660302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.881 [2024-11-15 10:36:02.680093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:01.881 [2024-11-15 10:36:02.680180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.881 [2024-11-15 10:36:02.680198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.882 [2024-11-15 10:36:02.699796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:01.882 [2024-11-15 10:36:02.699933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.882 [2024-11-15 10:36:02.699953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.882 [2024-11-15 10:36:02.720109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:01.882 [2024-11-15 10:36:02.720215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.882 [2024-11-15 10:36:02.720235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.141 [2024-11-15 10:36:02.740085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.141 [2024-11-15 10:36:02.740175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.141 [2024-11-15 10:36:02.740196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.141 [2024-11-15 10:36:02.759629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1491f20) 00:18:02.141 [2024-11-15 10:36:02.759730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.141 [2024-11-15 10:36:02.759755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.141 [2024-11-15 10:36:02.778861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.141 [2024-11-15 10:36:02.778953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.141 [2024-11-15 10:36:02.778980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.141 [2024-11-15 10:36:02.798958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.141 [2024-11-15 10:36:02.799075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.141 [2024-11-15 10:36:02.799096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.141 [2024-11-15 10:36:02.819044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.141 [2024-11-15 10:36:02.819165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.141 [2024-11-15 10:36:02.819185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.141 [2024-11-15 10:36:02.838706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.141 [2024-11-15 10:36:02.838806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.141 [2024-11-15 10:36:02.838825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.141 [2024-11-15 10:36:02.858075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.141 [2024-11-15 10:36:02.858151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.141 [2024-11-15 10:36:02.858170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.141 [2024-11-15 10:36:02.877162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.141 [2024-11-15 10:36:02.877496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.141 [2024-11-15 10:36:02.877520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.141 [2024-11-15 10:36:02.897007] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.141 [2024-11-15 10:36:02.897298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.141 [2024-11-15 10:36:02.897321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.141 [2024-11-15 10:36:02.919614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.141 [2024-11-15 10:36:02.919696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.141 [2024-11-15 10:36:02.919713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.141 [2024-11-15 10:36:02.937151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.141 [2024-11-15 10:36:02.937194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.141 [2024-11-15 10:36:02.937239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.141 [2024-11-15 10:36:02.955092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.141 [2024-11-15 10:36:02.955137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.141 [2024-11-15 10:36:02.955152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.141 [2024-11-15 10:36:02.972724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.141 [2024-11-15 10:36:02.972765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.141 [2024-11-15 10:36:02.972795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.141 [2024-11-15 10:36:02.989893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.141 [2024-11-15 10:36:02.989935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.141 [2024-11-15 10:36:02.989965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.400 [2024-11-15 10:36:03.006801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.400 [2024-11-15 10:36:03.006842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.400 [2024-11-15 10:36:03.006871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:02.400 [2024-11-15 10:36:03.024090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.400 [2024-11-15 10:36:03.024141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.400 [2024-11-15 10:36:03.024156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.400 [2024-11-15 10:36:03.041312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.400 [2024-11-15 10:36:03.041352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.400 [2024-11-15 10:36:03.041382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.400 [2024-11-15 10:36:03.058598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.400 [2024-11-15 10:36:03.058801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.400 [2024-11-15 10:36:03.058820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.400 [2024-11-15 10:36:03.076484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.400 [2024-11-15 10:36:03.076699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.400 [2024-11-15 10:36:03.076726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.400 [2024-11-15 10:36:03.093244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.400 [2024-11-15 10:36:03.093292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.400 [2024-11-15 10:36:03.093330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.400 [2024-11-15 10:36:03.110798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.400 [2024-11-15 10:36:03.110862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.400 [2024-11-15 10:36:03.110893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.400 [2024-11-15 10:36:03.127448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.400 [2024-11-15 10:36:03.127514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.400 [2024-11-15 10:36:03.127545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.400 [2024-11-15 10:36:03.143808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.400 [2024-11-15 10:36:03.143870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.400 [2024-11-15 10:36:03.143901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.400 [2024-11-15 10:36:03.160280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.400 [2024-11-15 10:36:03.160321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.400 [2024-11-15 10:36:03.160351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.400 [2024-11-15 10:36:03.176593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.400 [2024-11-15 10:36:03.176631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.400 [2024-11-15 10:36:03.176659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.400 [2024-11-15 10:36:03.192745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.400 [2024-11-15 10:36:03.192783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.400 [2024-11-15 10:36:03.192812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.400 [2024-11-15 10:36:03.208900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.400 [2024-11-15 10:36:03.208938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.400 [2024-11-15 10:36:03.208967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.400 [2024-11-15 10:36:03.225119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.400 [2024-11-15 10:36:03.225157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.400 [2024-11-15 10:36:03.225186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.401 [2024-11-15 10:36:03.241575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.401 [2024-11-15 10:36:03.241614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.401 [2024-11-15 10:36:03.241643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.660 [2024-11-15 10:36:03.258184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.660 [2024-11-15 10:36:03.258222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.660 [2024-11-15 10:36:03.258250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.660 14169.00 IOPS, 55.35 MiB/s [2024-11-15T10:36:03.513Z] [2024-11-15 10:36:03.276129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.660 [2024-11-15 10:36:03.276168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.660 [2024-11-15 10:36:03.276198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.660 [2024-11-15 10:36:03.292742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.660 [2024-11-15 10:36:03.292784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.660 [2024-11-15 10:36:03.292814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.660 [2024-11-15 10:36:03.310537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.660 [2024-11-15 10:36:03.310726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.660 [2024-11-15 10:36:03.310744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.660 [2024-11-15 10:36:03.328352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.660 [2024-11-15 10:36:03.328393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.660 [2024-11-15 10:36:03.328423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.660 [2024-11-15 10:36:03.345597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.660 [2024-11-15 10:36:03.345635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.660 [2024-11-15 10:36:03.345665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.660 [2024-11-15 10:36:03.362270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.660 [2024-11-15 10:36:03.362309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 
nsid:1 lba:11192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.660 [2024-11-15 10:36:03.362338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.660 [2024-11-15 10:36:03.378426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.660 [2024-11-15 10:36:03.378466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.660 [2024-11-15 10:36:03.378495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.660 [2024-11-15 10:36:03.394662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.660 [2024-11-15 10:36:03.394701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.660 [2024-11-15 10:36:03.394734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.660 [2024-11-15 10:36:03.417932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.660 [2024-11-15 10:36:03.417973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.660 [2024-11-15 10:36:03.418003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.660 [2024-11-15 10:36:03.434101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.660 [2024-11-15 10:36:03.434139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.660 [2024-11-15 10:36:03.434169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.660 [2024-11-15 10:36:03.451093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.660 [2024-11-15 10:36:03.451161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.660 [2024-11-15 10:36:03.451176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.660 [2024-11-15 10:36:03.468617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.660 [2024-11-15 10:36:03.468679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.660 [2024-11-15 10:36:03.468694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.660 [2024-11-15 10:36:03.486369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.660 [2024-11-15 10:36:03.486591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.660 [2024-11-15 10:36:03.486609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.660 [2024-11-15 10:36:03.504318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.660 [2024-11-15 10:36:03.504607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.660 [2024-11-15 10:36:03.504628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.919 [2024-11-15 10:36:03.522519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.919 [2024-11-15 10:36:03.522594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.919 [2024-11-15 10:36:03.522628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.919 [2024-11-15 10:36:03.540741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.919 [2024-11-15 10:36:03.540814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.919 [2024-11-15 10:36:03.540846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.919 [2024-11-15 10:36:03.558446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.919 [2024-11-15 10:36:03.558519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.919 [2024-11-15 10:36:03.558534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.919 [2024-11-15 10:36:03.576035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.919 [2024-11-15 10:36:03.576109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.919 [2024-11-15 10:36:03.576124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.919 [2024-11-15 10:36:03.593513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.919 [2024-11-15 10:36:03.593554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.919 [2024-11-15 10:36:03.593584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.919 [2024-11-15 10:36:03.611915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1491f20) 00:18:02.919 [2024-11-15 10:36:03.612142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.919 [2024-11-15 10:36:03.612162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.919 [2024-11-15 10:36:03.628865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.919 [2024-11-15 10:36:03.629103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.919 [2024-11-15 10:36:03.629236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.919 [2024-11-15 10:36:03.645430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.919 [2024-11-15 10:36:03.645652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.919 [2024-11-15 10:36:03.645795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.919 [2024-11-15 10:36:03.662326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.919 [2024-11-15 10:36:03.662548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.919 [2024-11-15 10:36:03.662673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.919 [2024-11-15 10:36:03.680148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.919 [2024-11-15 10:36:03.680343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.919 [2024-11-15 10:36:03.680474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.919 [2024-11-15 10:36:03.697994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.919 [2024-11-15 10:36:03.698199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.919 [2024-11-15 10:36:03.698336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.919 [2024-11-15 10:36:03.716040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.919 [2024-11-15 10:36:03.716309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.919 [2024-11-15 10:36:03.716561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.919 [2024-11-15 10:36:03.734220] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.919 [2024-11-15 10:36:03.734524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.919 [2024-11-15 10:36:03.734655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.919 [2024-11-15 10:36:03.752401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.919 [2024-11-15 10:36:03.752642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.919 [2024-11-15 10:36:03.752924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.919 [2024-11-15 10:36:03.770224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:02.919 [2024-11-15 10:36:03.770413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.178 [2024-11-15 10:36:03.770563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.178 [2024-11-15 10:36:03.787910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.178 [2024-11-15 10:36:03.787955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.178 [2024-11-15 10:36:03.787970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.178 [2024-11-15 10:36:03.805003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.178 [2024-11-15 10:36:03.805047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.178 [2024-11-15 10:36:03.805080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.178 [2024-11-15 10:36:03.822155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.178 [2024-11-15 10:36:03.822307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.178 [2024-11-15 10:36:03.822326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.178 [2024-11-15 10:36:03.839509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.178 [2024-11-15 10:36:03.839554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.178 [2024-11-15 10:36:03.839568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:03.178 [2024-11-15 10:36:03.856700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.178 [2024-11-15 10:36:03.856747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.178 [2024-11-15 10:36:03.856762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.178 [2024-11-15 10:36:03.873883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.178 [2024-11-15 10:36:03.873925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.178 [2024-11-15 10:36:03.873939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.178 [2024-11-15 10:36:03.891158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.178 [2024-11-15 10:36:03.891197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.178 [2024-11-15 10:36:03.891226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.178 [2024-11-15 10:36:03.907779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.178 [2024-11-15 10:36:03.907948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.178 [2024-11-15 10:36:03.907966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.178 [2024-11-15 10:36:03.924895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.178 [2024-11-15 10:36:03.925096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.178 [2024-11-15 10:36:03.925114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.178 [2024-11-15 10:36:03.941722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.178 [2024-11-15 10:36:03.941763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.178 [2024-11-15 10:36:03.941792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.178 [2024-11-15 10:36:03.958548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.178 [2024-11-15 10:36:03.958588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.178 [2024-11-15 10:36:03.958618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.178 [2024-11-15 10:36:03.975856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.178 [2024-11-15 10:36:03.975907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.178 [2024-11-15 10:36:03.975922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.178 [2024-11-15 10:36:03.993572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.178 [2024-11-15 10:36:03.993859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.178 [2024-11-15 10:36:03.993879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.178 [2024-11-15 10:36:04.011305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.178 [2024-11-15 10:36:04.011414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.178 [2024-11-15 10:36:04.011431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.437 [2024-11-15 10:36:04.028425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.438 [2024-11-15 10:36:04.028747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.438 [2024-11-15 10:36:04.028767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.438 [2024-11-15 10:36:04.046131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.438 [2024-11-15 10:36:04.046300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.438 [2024-11-15 10:36:04.046319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.438 [2024-11-15 10:36:04.063783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.438 [2024-11-15 10:36:04.063827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.438 [2024-11-15 10:36:04.063857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.438 [2024-11-15 10:36:04.081030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.438 [2024-11-15 10:36:04.081229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.438 [2024-11-15 10:36:04.081247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.438 [2024-11-15 10:36:04.098017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.438 [2024-11-15 10:36:04.098098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.438 [2024-11-15 10:36:04.098114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.438 [2024-11-15 10:36:04.114920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.438 [2024-11-15 10:36:04.114962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.438 [2024-11-15 10:36:04.114992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.438 [2024-11-15 10:36:04.131896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.438 [2024-11-15 10:36:04.132092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.438 [2024-11-15 10:36:04.132111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.438 [2024-11-15 10:36:04.149857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.438 [2024-11-15 10:36:04.149903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.438 [2024-11-15 10:36:04.149933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.438 [2024-11-15 10:36:04.166626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.438 [2024-11-15 10:36:04.166667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.438 [2024-11-15 10:36:04.166697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.438 [2024-11-15 10:36:04.183524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.438 [2024-11-15 10:36:04.183717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.438 [2024-11-15 10:36:04.183736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.438 [2024-11-15 10:36:04.200788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.438 [2024-11-15 10:36:04.200830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:03.438 [2024-11-15 10:36:04.200860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.438 [2024-11-15 10:36:04.217817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.438 [2024-11-15 10:36:04.217857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.438 [2024-11-15 10:36:04.217888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.438 [2024-11-15 10:36:04.234607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.438 [2024-11-15 10:36:04.234657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.438 [2024-11-15 10:36:04.234687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.438 [2024-11-15 10:36:04.251751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 00:18:03.438 [2024-11-15 10:36:04.252103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.438 [2024-11-15 10:36:04.252125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.438 [2024-11-15 10:36:04.270145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1491f20) 14358.50 IOPS, 56.09 MiB/s [2024-11-15T10:36:04.291Z] 00:18:03.438 [2024-11-15 10:36:04.270332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.438 [2024-11-15 10:36:04.270350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.438 00:18:03.438 Latency(us) 00:18:03.438 [2024-11-15T10:36:04.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.438 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:03.438 nvme0n1 : 2.01 14385.82 56.19 0.00 0.00 8890.03 7864.32 31933.91 00:18:03.438 [2024-11-15T10:36:04.291Z] =================================================================================================================== 00:18:03.438 [2024-11-15T10:36:04.291Z] Total : 14385.82 56.19 0.00 0.00 8890.03 7864.32 31933.91 00:18:03.438 { 00:18:03.438 "results": [ 00:18:03.438 { 00:18:03.438 "job": "nvme0n1", 00:18:03.438 "core_mask": "0x2", 00:18:03.438 "workload": "randread", 00:18:03.438 "status": "finished", 00:18:03.438 "queue_depth": 128, 00:18:03.438 "io_size": 4096, 00:18:03.438 "runtime": 2.005099, 00:18:03.438 "iops": 14385.823343386037, 00:18:03.438 "mibps": 56.19462243510171, 00:18:03.438 "io_failed": 0, 00:18:03.438 "io_timeout": 0, 00:18:03.438 "avg_latency_us": 8890.032395089742, 00:18:03.438 "min_latency_us": 7864.32, 00:18:03.438 "max_latency_us": 31933.905454545453 00:18:03.438 } 00:18:03.438 ], 00:18:03.438 "core_count": 1 00:18:03.438 } 00:18:03.696 10:36:04
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:03.696 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:03.696 | .driver_specific 00:18:03.696 | .nvme_error 00:18:03.696 | .status_code 00:18:03.696 | .command_transient_transport_error' 00:18:03.696 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:03.696 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:03.955 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 113 > 0 )) 00:18:03.955 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80453 00:18:03.955 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80453 ']' 00:18:03.955 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80453 00:18:03.955 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:18:03.955 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:03.955 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80453 00:18:03.955 killing process with pid 80453 00:18:03.955 Received shutdown signal, test time was about 2.000000 seconds 00:18:03.955 00:18:03.955 Latency(us) 00:18:03.955 [2024-11-15T10:36:04.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.955 [2024-11-15T10:36:04.808Z] =================================================================================================================== 00:18:03.955 [2024-11-15T10:36:04.808Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:03.955 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:03.955 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:03.955 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80453' 00:18:03.955 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80453 00:18:03.955 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80453 00:18:04.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
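The transient-error check traced just above reduces to one RPC call and one jq filter: digest.sh's get_transient_errcount asks the bdevperf application for per-bdev I/O statistics and pulls out the NVMe command_transient_transport_error counter, which came back as 113 for this pass, hence the (( 113 > 0 )) assertion. A minimal stand-alone sketch of that query, assuming the bperf RPC socket is still listening and the bdev is named nvme0n1 as above:

  # Query bdevperf over its RPC socket for bdev I/O statistics; with
  # bdev_nvme_set_options --nvme-error-stat applied (as the script does),
  # driver_specific.nvme_error carries per-status-code NVMe error counters.
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The test only requires that digest errors were actually counted:
  (( errcount > 0 ))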
00:18:04.214 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:18:04.214 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:04.214 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:04.214 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:04.214 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:04.214 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80505 00:18:04.214 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:18:04.214 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80505 /var/tmp/bperf.sock 00:18:04.214 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80505 ']' 00:18:04.214 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:04.214 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:04.214 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:04.214 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:04.214 10:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:04.214 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:04.214 Zero copy mechanism will not be used. 00:18:04.214 [2024-11-15 10:36:04.881630] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
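For reference, the bdevperf command echoed above (host/digest.sh@57) starts the second error pass in wait mode; its flags map one-to-one onto the run_bperf_err randread 131072 16 arguments. Roughly, under the same paths this job uses:

  # Start bdevperf on core 1 (core mask 0x2) with an RPC socket, a randread
  # workload of 128 KiB I/Os at queue depth 16 for 2 seconds, and -z so it
  # waits for RPC configuration before issuing any I/O.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # waitforlisten then blocks until /var/tmp/bperf.sock accepts RPCs (pid 80505 here).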
00:18:04.214 [2024-11-15 10:36:04.881726] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80505 ] 00:18:04.214 [2024-11-15 10:36:05.022516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.473 [2024-11-15 10:36:05.083243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.473 [2024-11-15 10:36:05.137516] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:04.473 10:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:04.473 10:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:18:04.473 10:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:04.473 10:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:04.731 10:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:04.731 10:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.731 10:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:04.989 10:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.989 10:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:04.989 10:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:05.248 nvme0n1 00:18:05.248 10:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:05.248 10:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.248 10:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:05.248 10:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.248 10:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:05.248 10:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:05.248 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:05.248 Zero copy mechanism will not be used. 00:18:05.248 Running I/O for 2 seconds... 
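Putting the setup xtrace above back into script form: the freshly started bdevperf instance is configured over its RPC socket, crc32c error injection is switched off while the controller is attached with TCP data digest enabled, then re-armed so subsequent data digest calculations are corrupted, which is what produces the flood of transient transport errors below. A condensed sketch built from the exact commands visible in the log (rpc_cmd is the suite's generic RPC helper; its target socket is not shown in this excerpt):

  bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

  # Collect per-bdev NVMe error statistics and retry failed I/O in the bdev layer.
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Injection off while the controller attaches with data digest (--ddgst) enabled;
  # the attach exposes bdev nvme0n1, as printed above.
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Re-arm crc32c corruption (arguments copied verbatim from the xtrace above).
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

  # Finally drive the configured workload through the waiting bdevperf process.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests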
00:18:05.248 [2024-11-15 10:36:06.053412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.248 [2024-11-15 10:36:06.053507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.248 [2024-11-15 10:36:06.053527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.248 [2024-11-15 10:36:06.059195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.248 [2024-11-15 10:36:06.059246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.248 [2024-11-15 10:36:06.059266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.248 [2024-11-15 10:36:06.064780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.248 [2024-11-15 10:36:06.064829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.248 [2024-11-15 10:36:06.064851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.248 [2024-11-15 10:36:06.070548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.248 [2024-11-15 10:36:06.070925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.248 [2024-11-15 10:36:06.070951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.248 [2024-11-15 10:36:06.076690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.248 [2024-11-15 10:36:06.076775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.248 [2024-11-15 10:36:06.076794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.248 [2024-11-15 10:36:06.082230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.248 [2024-11-15 10:36:06.082526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.248 [2024-11-15 10:36:06.082549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.248 [2024-11-15 10:36:06.088291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.248 [2024-11-15 10:36:06.088378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.248 [2024-11-15 10:36:06.088407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.248 [2024-11-15 10:36:06.093624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.248 [2024-11-15 10:36:06.093835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.248 [2024-11-15 10:36:06.093854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.248 [2024-11-15 10:36:06.099019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.248 [2024-11-15 10:36:06.099077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.248 [2024-11-15 10:36:06.099091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.508 [2024-11-15 10:36:06.103990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.508 [2024-11-15 10:36:06.104041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.508 [2024-11-15 10:36:06.104076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.508 [2024-11-15 10:36:06.108704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.508 [2024-11-15 10:36:06.108906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.508 [2024-11-15 10:36:06.108923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.508 [2024-11-15 10:36:06.113781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.508 [2024-11-15 10:36:06.113822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.508 [2024-11-15 10:36:06.113846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.508 [2024-11-15 10:36:06.118658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.508 [2024-11-15 10:36:06.118711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.508 [2024-11-15 10:36:06.118727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.508 [2024-11-15 10:36:06.123574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.508 [2024-11-15 10:36:06.123757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.508 [2024-11-15 10:36:06.123775] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.508 [2024-11-15 10:36:06.128713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.508 [2024-11-15 10:36:06.128751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.508 [2024-11-15 10:36:06.128775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.508 [2024-11-15 10:36:06.133663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.508 [2024-11-15 10:36:06.133712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.508 [2024-11-15 10:36:06.133734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.508 [2024-11-15 10:36:06.138480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.508 [2024-11-15 10:36:06.138654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.508 [2024-11-15 10:36:06.138674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.508 [2024-11-15 10:36:06.143521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.508 [2024-11-15 10:36:06.143562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.508 [2024-11-15 10:36:06.143576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.508 [2024-11-15 10:36:06.148468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.508 [2024-11-15 10:36:06.148507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.508 [2024-11-15 10:36:06.148530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.508 [2024-11-15 10:36:06.153418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.508 [2024-11-15 10:36:06.153593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.508 [2024-11-15 10:36:06.153610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.508 [2024-11-15 10:36:06.158576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.508 [2024-11-15 10:36:06.158616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.508 [2024-11-15 10:36:06.158639] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.508 [2024-11-15 10:36:06.163571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.508 [2024-11-15 10:36:06.163612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.508 [2024-11-15 10:36:06.163625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.508 [2024-11-15 10:36:06.168579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.508 [2024-11-15 10:36:06.168769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.508 [2024-11-15 10:36:06.168787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.509 [2024-11-15 10:36:06.173890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.509 [2024-11-15 10:36:06.173931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.509 [2024-11-15 10:36:06.173945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.509 [2024-11-15 10:36:06.178885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.509 [2024-11-15 10:36:06.178923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.509 [2024-11-15 10:36:06.178937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.509 [2024-11-15 10:36:06.184616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.509 [2024-11-15 10:36:06.184667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.509 [2024-11-15 10:36:06.184690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.509 [2024-11-15 10:36:06.190334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.509 [2024-11-15 10:36:06.190537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.509 [2024-11-15 10:36:06.190556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.509 [2024-11-15 10:36:06.195440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.509 [2024-11-15 10:36:06.195481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:05.509 [2024-11-15 10:36:06.195495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.509 [2024-11-15 10:36:06.200409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.509 [2024-11-15 10:36:06.200447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.509 [2024-11-15 10:36:06.200471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.509 [2024-11-15 10:36:06.205435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.509 [2024-11-15 10:36:06.205612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.509 [2024-11-15 10:36:06.205630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.509 [2024-11-15 10:36:06.210536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.509 [2024-11-15 10:36:06.210575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.509 [2024-11-15 10:36:06.210599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.509 [2024-11-15 10:36:06.215601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.509 [2024-11-15 10:36:06.215640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.509 [2024-11-15 10:36:06.215654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.509 [2024-11-15 10:36:06.220461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.509 [2024-11-15 10:36:06.220632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.509 [2024-11-15 10:36:06.220650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.509 [2024-11-15 10:36:06.225597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.509 [2024-11-15 10:36:06.225634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.509 [2024-11-15 10:36:06.225659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.509 [2024-11-15 10:36:06.230500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.509 [2024-11-15 10:36:06.230538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.509 [2024-11-15 10:36:06.230562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.509 [2024-11-15 10:36:06.235404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.509 [2024-11-15 10:36:06.235568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.509 [2024-11-15 10:36:06.235586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.509 [2024-11-15 10:36:06.240647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.509 [2024-11-15 10:36:06.240687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.509 [2024-11-15 10:36:06.240710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.509 [2024-11-15 10:36:06.245507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.509 [2024-11-15 10:36:06.245554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.509 [2024-11-15 10:36:06.245576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.509 [2024-11-15 10:36:06.250493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.509 [2024-11-15 10:36:06.250672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.509 [2024-11-15 10:36:06.250689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.509 [2024-11-15 10:36:06.255720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.509 [2024-11-15 10:36:06.255760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.509 [2024-11-15 10:36:06.255781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.509 [2024-11-15 10:36:06.260582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.509 [2024-11-15 10:36:06.260621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.509 [2024-11-15 10:36:06.260643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.509 [2024-11-15 10:36:06.265588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.509 [2024-11-15 10:36:06.265777] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.509 [2024-11-15 10:36:06.265800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.509 [2024-11-15 10:36:06.270849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.509 [2024-11-15 10:36:06.270889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.509 [2024-11-15 10:36:06.270912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.509 [2024-11-15 10:36:06.275838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.509 [2024-11-15 10:36:06.275878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.509 [2024-11-15 10:36:06.275899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.509 [2024-11-15 10:36:06.280776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.509 [2024-11-15 10:36:06.280966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.510 [2024-11-15 10:36:06.280986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.510 [2024-11-15 10:36:06.285921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.510 [2024-11-15 10:36:06.285962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.510 [2024-11-15 10:36:06.285985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.510 [2024-11-15 10:36:06.291110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.510 [2024-11-15 10:36:06.291149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.510 [2024-11-15 10:36:06.291162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.510 [2024-11-15 10:36:06.296026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.510 [2024-11-15 10:36:06.296097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.510 [2024-11-15 10:36:06.296112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.510 [2024-11-15 10:36:06.301141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 
00:18:05.510 [2024-11-15 10:36:06.301180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.510 [2024-11-15 10:36:06.301205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.510 [2024-11-15 10:36:06.306133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.510 [2024-11-15 10:36:06.306171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.510 [2024-11-15 10:36:06.306195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.510 [2024-11-15 10:36:06.310987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.510 [2024-11-15 10:36:06.311027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.510 [2024-11-15 10:36:06.311064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.510 [2024-11-15 10:36:06.316014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.510 [2024-11-15 10:36:06.316060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.510 [2024-11-15 10:36:06.316075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.510 [2024-11-15 10:36:06.321116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.510 [2024-11-15 10:36:06.321169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.510 [2024-11-15 10:36:06.321192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.510 [2024-11-15 10:36:06.326242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.510 [2024-11-15 10:36:06.326281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.510 [2024-11-15 10:36:06.326305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.510 [2024-11-15 10:36:06.331193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.510 [2024-11-15 10:36:06.331232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.510 [2024-11-15 10:36:06.331256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.510 [2024-11-15 10:36:06.336133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.510 [2024-11-15 10:36:06.336170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.510 [2024-11-15 10:36:06.336184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.510 [2024-11-15 10:36:06.341178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.510 [2024-11-15 10:36:06.341228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.510 [2024-11-15 10:36:06.341252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.510 [2024-11-15 10:36:06.346193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.510 [2024-11-15 10:36:06.346232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.510 [2024-11-15 10:36:06.346245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.510 [2024-11-15 10:36:06.351309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.510 [2024-11-15 10:36:06.351347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.510 [2024-11-15 10:36:06.351397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.510 [2024-11-15 10:36:06.356193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.510 [2024-11-15 10:36:06.356232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.510 [2024-11-15 10:36:06.356256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.770 [2024-11-15 10:36:06.361058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.770 [2024-11-15 10:36:06.361126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.770 [2024-11-15 10:36:06.361149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.770 [2024-11-15 10:36:06.366081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.770 [2024-11-15 10:36:06.366126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.770 [2024-11-15 10:36:06.366141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.770 [2024-11-15 10:36:06.370963] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.770 [2024-11-15 10:36:06.371001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.770 [2024-11-15 10:36:06.371024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.770 [2024-11-15 10:36:06.375919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.770 [2024-11-15 10:36:06.376204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.770 [2024-11-15 10:36:06.376227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.770 [2024-11-15 10:36:06.381275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.770 [2024-11-15 10:36:06.381458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.770 [2024-11-15 10:36:06.381582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.770 [2024-11-15 10:36:06.386668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.770 [2024-11-15 10:36:06.386850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.770 [2024-11-15 10:36:06.386982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.770 [2024-11-15 10:36:06.391947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.770 [2024-11-15 10:36:06.392149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.770 [2024-11-15 10:36:06.392293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.770 [2024-11-15 10:36:06.397188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.770 [2024-11-15 10:36:06.397397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.770 [2024-11-15 10:36:06.397553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.770 [2024-11-15 10:36:06.402430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.770 [2024-11-15 10:36:06.402632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.770 [2024-11-15 10:36:06.402821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 
00:18:05.770 [2024-11-15 10:36:06.407914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.770 [2024-11-15 10:36:06.408119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.770 [2024-11-15 10:36:06.408305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.770 [2024-11-15 10:36:06.413257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.770 [2024-11-15 10:36:06.413455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.770 [2024-11-15 10:36:06.413655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.770 [2024-11-15 10:36:06.418586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.770 [2024-11-15 10:36:06.418793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.770 [2024-11-15 10:36:06.418921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.770 [2024-11-15 10:36:06.424179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.771 [2024-11-15 10:36:06.424367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.771 [2024-11-15 10:36:06.424493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.771 [2024-11-15 10:36:06.429515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.771 [2024-11-15 10:36:06.429736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.771 [2024-11-15 10:36:06.429843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.771 [2024-11-15 10:36:06.434636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.771 [2024-11-15 10:36:06.434677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.771 [2024-11-15 10:36:06.434691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.771 [2024-11-15 10:36:06.439392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.771 [2024-11-15 10:36:06.439562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.771 [2024-11-15 10:36:06.439583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.771 [2024-11-15 10:36:06.444437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.771 [2024-11-15 10:36:06.444487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.771 [2024-11-15 10:36:06.444511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.771 [2024-11-15 10:36:06.449489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.771 [2024-11-15 10:36:06.449528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.771 [2024-11-15 10:36:06.449552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.771 [2024-11-15 10:36:06.454453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.771 [2024-11-15 10:36:06.454639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.771 [2024-11-15 10:36:06.454656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.771 [2024-11-15 10:36:06.459581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.771 [2024-11-15 10:36:06.459623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.771 [2024-11-15 10:36:06.459637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.771 [2024-11-15 10:36:06.464677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.771 [2024-11-15 10:36:06.464727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.771 [2024-11-15 10:36:06.464741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.771 [2024-11-15 10:36:06.469746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.771 [2024-11-15 10:36:06.469920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.771 [2024-11-15 10:36:06.469938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.771 [2024-11-15 10:36:06.475034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.771 [2024-11-15 10:36:06.475088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.771 [2024-11-15 10:36:06.475103] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.771 [2024-11-15 10:36:06.480073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.771 [2024-11-15 10:36:06.480124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.771 [2024-11-15 10:36:06.480138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.771 [2024-11-15 10:36:06.485009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.771 [2024-11-15 10:36:06.485198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.771 [2024-11-15 10:36:06.485217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.771 [2024-11-15 10:36:06.490157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.771 [2024-11-15 10:36:06.490196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.771 [2024-11-15 10:36:06.490210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.771 [2024-11-15 10:36:06.495072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.771 [2024-11-15 10:36:06.495111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.771 [2024-11-15 10:36:06.495124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.771 [2024-11-15 10:36:06.499955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.771 [2024-11-15 10:36:06.499995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.771 [2024-11-15 10:36:06.500017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.771 [2024-11-15 10:36:06.504928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.771 [2024-11-15 10:36:06.505109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.771 [2024-11-15 10:36:06.505128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.771 [2024-11-15 10:36:06.510359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.771 [2024-11-15 10:36:06.510401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.771 [2024-11-15 
10:36:06.510423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.771 [2024-11-15 10:36:06.515340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.771 [2024-11-15 10:36:06.515389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.771 [2024-11-15 10:36:06.515403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.771 [2024-11-15 10:36:06.520240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.771 [2024-11-15 10:36:06.520279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.771 [2024-11-15 10:36:06.520294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.771 [2024-11-15 10:36:06.525179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.771 [2024-11-15 10:36:06.525217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.771 [2024-11-15 10:36:06.525231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.771 [2024-11-15 10:36:06.530098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.771 [2024-11-15 10:36:06.530134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.771 [2024-11-15 10:36:06.530160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.771 [2024-11-15 10:36:06.535033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.771 [2024-11-15 10:36:06.535084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.771 [2024-11-15 10:36:06.535099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.772 [2024-11-15 10:36:06.539899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.772 [2024-11-15 10:36:06.540086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.772 [2024-11-15 10:36:06.540107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.772 [2024-11-15 10:36:06.545272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.772 [2024-11-15 10:36:06.545458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:05.772 [2024-11-15 10:36:06.545594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.772 [2024-11-15 10:36:06.550683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.772 [2024-11-15 10:36:06.550860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.772 [2024-11-15 10:36:06.550994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.772 [2024-11-15 10:36:06.556125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.772 [2024-11-15 10:36:06.556327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.772 [2024-11-15 10:36:06.556457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.772 [2024-11-15 10:36:06.561594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.772 [2024-11-15 10:36:06.561796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.772 [2024-11-15 10:36:06.561943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.772 [2024-11-15 10:36:06.567143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.772 [2024-11-15 10:36:06.567371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.772 [2024-11-15 10:36:06.567526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.772 [2024-11-15 10:36:06.572620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.772 [2024-11-15 10:36:06.572803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.772 [2024-11-15 10:36:06.572932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.772 [2024-11-15 10:36:06.578004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.772 [2024-11-15 10:36:06.578233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.772 [2024-11-15 10:36:06.578445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.772 [2024-11-15 10:36:06.583816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.772 [2024-11-15 10:36:06.584001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.772 [2024-11-15 10:36:06.584278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.772 [2024-11-15 10:36:06.589426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.772 [2024-11-15 10:36:06.589610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.772 [2024-11-15 10:36:06.589767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.772 [2024-11-15 10:36:06.594912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.772 [2024-11-15 10:36:06.594953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.772 [2024-11-15 10:36:06.594968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.772 [2024-11-15 10:36:06.600094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.772 [2024-11-15 10:36:06.600135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.772 [2024-11-15 10:36:06.600158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.772 [2024-11-15 10:36:06.605111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.772 [2024-11-15 10:36:06.605150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.772 [2024-11-15 10:36:06.605177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.772 [2024-11-15 10:36:06.610027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.772 [2024-11-15 10:36:06.610082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.772 [2024-11-15 10:36:06.610096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.772 [2024-11-15 10:36:06.614975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.772 [2024-11-15 10:36:06.615016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.772 [2024-11-15 10:36:06.615039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.772 [2024-11-15 10:36:06.619803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:05.772 [2024-11-15 10:36:06.619842] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.772 [2024-11-15 10:36:06.619865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.032 [2024-11-15 10:36:06.624761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.032 [2024-11-15 10:36:06.624808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.032 [2024-11-15 10:36:06.624826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.032 [2024-11-15 10:36:06.629665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.032 [2024-11-15 10:36:06.629705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.032 [2024-11-15 10:36:06.629728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.032 [2024-11-15 10:36:06.634517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.032 [2024-11-15 10:36:06.634702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.032 [2024-11-15 10:36:06.634721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.032 [2024-11-15 10:36:06.639620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.032 [2024-11-15 10:36:06.639661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.032 [2024-11-15 10:36:06.639675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.032 [2024-11-15 10:36:06.644433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.032 [2024-11-15 10:36:06.644473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.032 [2024-11-15 10:36:06.644486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.032 [2024-11-15 10:36:06.649366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.032 [2024-11-15 10:36:06.649539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.032 [2024-11-15 10:36:06.649557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.032 [2024-11-15 10:36:06.654378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 
00:18:06.032 [2024-11-15 10:36:06.654419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.032 [2024-11-15 10:36:06.654442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.032 [2024-11-15 10:36:06.659228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.032 [2024-11-15 10:36:06.659268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.032 [2024-11-15 10:36:06.659293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.032 [2024-11-15 10:36:06.664143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.032 [2024-11-15 10:36:06.664183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.032 [2024-11-15 10:36:06.664205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.032 [2024-11-15 10:36:06.668899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.032 [2024-11-15 10:36:06.669086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.032 [2024-11-15 10:36:06.669104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.032 [2024-11-15 10:36:06.673829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.032 [2024-11-15 10:36:06.673880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.032 [2024-11-15 10:36:06.673903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.032 [2024-11-15 10:36:06.678691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.032 [2024-11-15 10:36:06.678730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.032 [2024-11-15 10:36:06.678753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.032 [2024-11-15 10:36:06.683479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.032 [2024-11-15 10:36:06.683518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.032 [2024-11-15 10:36:06.683532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.032 [2024-11-15 10:36:06.688400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.032 [2024-11-15 10:36:06.688578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.032 [2024-11-15 10:36:06.688597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.033 [2024-11-15 10:36:06.693521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.033 [2024-11-15 10:36:06.693561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.033 [2024-11-15 10:36:06.693584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.033 [2024-11-15 10:36:06.698757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.033 [2024-11-15 10:36:06.698795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.033 [2024-11-15 10:36:06.698809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.033 [2024-11-15 10:36:06.704252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.033 [2024-11-15 10:36:06.704290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.033 [2024-11-15 10:36:06.704314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.033 [2024-11-15 10:36:06.709334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.033 [2024-11-15 10:36:06.709372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.033 [2024-11-15 10:36:06.709386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.033 [2024-11-15 10:36:06.714177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.033 [2024-11-15 10:36:06.714214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.033 [2024-11-15 10:36:06.714239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.033 [2024-11-15 10:36:06.719241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.033 [2024-11-15 10:36:06.719284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.033 [2024-11-15 10:36:06.719308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.033 [2024-11-15 10:36:06.724100] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.033 [2024-11-15 10:36:06.724139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.033 [2024-11-15 10:36:06.724162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.033 [2024-11-15 10:36:06.729029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.033 [2024-11-15 10:36:06.729094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.033 [2024-11-15 10:36:06.729119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.033 [2024-11-15 10:36:06.734070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.033 [2024-11-15 10:36:06.734132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.033 [2024-11-15 10:36:06.734156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.033 [2024-11-15 10:36:06.739001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.033 [2024-11-15 10:36:06.739040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.033 [2024-11-15 10:36:06.739094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.033 [2024-11-15 10:36:06.744024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.033 [2024-11-15 10:36:06.744091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.033 [2024-11-15 10:36:06.744108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.033 [2024-11-15 10:36:06.749048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.033 [2024-11-15 10:36:06.749101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.033 [2024-11-15 10:36:06.749116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.033 [2024-11-15 10:36:06.753931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.033 [2024-11-15 10:36:06.754204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.033 [2024-11-15 10:36:06.754222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 
dnr:0 00:18:06.033 [2024-11-15 10:36:06.759183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.033 [2024-11-15 10:36:06.759220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.033 [2024-11-15 10:36:06.759244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.033 [2024-11-15 10:36:06.764241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.033 [2024-11-15 10:36:06.764277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.033 [2024-11-15 10:36:06.764302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.033 [2024-11-15 10:36:06.769096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.033 [2024-11-15 10:36:06.769132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.033 [2024-11-15 10:36:06.769145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.033 [2024-11-15 10:36:06.773821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.033 [2024-11-15 10:36:06.773859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.033 [2024-11-15 10:36:06.773883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.033 [2024-11-15 10:36:06.778829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.033 [2024-11-15 10:36:06.778868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.033 [2024-11-15 10:36:06.778883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.033 [2024-11-15 10:36:06.783894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.033 [2024-11-15 10:36:06.783938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.033 [2024-11-15 10:36:06.783951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.033 [2024-11-15 10:36:06.788974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.033 [2024-11-15 10:36:06.789304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.033 [2024-11-15 10:36:06.789324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.033 [2024-11-15 10:36:06.794314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.033 [2024-11-15 10:36:06.794351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.033 [2024-11-15 10:36:06.794376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.034 [2024-11-15 10:36:06.799322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.034 [2024-11-15 10:36:06.799367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.034 [2024-11-15 10:36:06.799410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.034 [2024-11-15 10:36:06.804456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.034 [2024-11-15 10:36:06.804682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.034 [2024-11-15 10:36:06.804700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.034 [2024-11-15 10:36:06.809825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.034 [2024-11-15 10:36:06.810024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.034 [2024-11-15 10:36:06.810169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.034 [2024-11-15 10:36:06.815160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.034 [2024-11-15 10:36:06.815377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.034 [2024-11-15 10:36:06.815573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.034 [2024-11-15 10:36:06.820742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.034 [2024-11-15 10:36:06.820931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.034 [2024-11-15 10:36:06.821113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.034 [2024-11-15 10:36:06.826228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.034 [2024-11-15 10:36:06.826448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.034 [2024-11-15 10:36:06.826643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.034 [2024-11-15 10:36:06.831638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.034 [2024-11-15 10:36:06.831829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.034 [2024-11-15 10:36:06.832090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.034 [2024-11-15 10:36:06.837254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.034 [2024-11-15 10:36:06.837453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.034 [2024-11-15 10:36:06.837605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.034 [2024-11-15 10:36:06.842688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.034 [2024-11-15 10:36:06.842864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.034 [2024-11-15 10:36:06.842991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.034 [2024-11-15 10:36:06.847885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.034 [2024-11-15 10:36:06.848070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.034 [2024-11-15 10:36:06.848228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.034 [2024-11-15 10:36:06.853079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.034 [2024-11-15 10:36:06.853241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.034 [2024-11-15 10:36:06.853268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.034 [2024-11-15 10:36:06.858223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.034 [2024-11-15 10:36:06.858264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.034 [2024-11-15 10:36:06.858279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.034 [2024-11-15 10:36:06.863169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.034 [2024-11-15 10:36:06.863206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.034 
[2024-11-15 10:36:06.863232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.034 [2024-11-15 10:36:06.868116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.034 [2024-11-15 10:36:06.868153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.034 [2024-11-15 10:36:06.868176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.034 [2024-11-15 10:36:06.873345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.034 [2024-11-15 10:36:06.873384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.034 [2024-11-15 10:36:06.873397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.034 [2024-11-15 10:36:06.878458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.034 [2024-11-15 10:36:06.878692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.034 [2024-11-15 10:36:06.878720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.294 [2024-11-15 10:36:06.883666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.294 [2024-11-15 10:36:06.883707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.294 [2024-11-15 10:36:06.883722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.294 [2024-11-15 10:36:06.888733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.294 [2024-11-15 10:36:06.888777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.294 [2024-11-15 10:36:06.888791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.294 [2024-11-15 10:36:06.893870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.294 [2024-11-15 10:36:06.894097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.294 [2024-11-15 10:36:06.894116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.294 [2024-11-15 10:36:06.899252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.294 [2024-11-15 10:36:06.899302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13248 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.294 [2024-11-15 10:36:06.899315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.294 [2024-11-15 10:36:06.904408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.294 [2024-11-15 10:36:06.904462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.294 [2024-11-15 10:36:06.904477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.294 [2024-11-15 10:36:06.909533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.294 [2024-11-15 10:36:06.909794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.294 [2024-11-15 10:36:06.909812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.294 [2024-11-15 10:36:06.914976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.294 [2024-11-15 10:36:06.915026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.294 [2024-11-15 10:36:06.915041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.294 [2024-11-15 10:36:06.920248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.295 [2024-11-15 10:36:06.920323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.295 [2024-11-15 10:36:06.920336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.295 [2024-11-15 10:36:06.925217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.295 [2024-11-15 10:36:06.925272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.295 [2024-11-15 10:36:06.925285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.295 [2024-11-15 10:36:06.930367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.295 [2024-11-15 10:36:06.930406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.295 [2024-11-15 10:36:06.930419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.295 [2024-11-15 10:36:06.935596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.295 [2024-11-15 10:36:06.935878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.295 [2024-11-15 10:36:06.935895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.295 [2024-11-15 10:36:06.941130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.295 [2024-11-15 10:36:06.941170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.295 [2024-11-15 10:36:06.941183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.295 [2024-11-15 10:36:06.946118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.295 [2024-11-15 10:36:06.946157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.295 [2024-11-15 10:36:06.946172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.295 [2024-11-15 10:36:06.951214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.295 [2024-11-15 10:36:06.951252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.295 [2024-11-15 10:36:06.951266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.295 [2024-11-15 10:36:06.956314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.295 [2024-11-15 10:36:06.956360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.295 [2024-11-15 10:36:06.956386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.295 [2024-11-15 10:36:06.961256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.295 [2024-11-15 10:36:06.961295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.295 [2024-11-15 10:36:06.961310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.295 [2024-11-15 10:36:06.966184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.295 [2024-11-15 10:36:06.966431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.295 [2024-11-15 10:36:06.966450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.295 [2024-11-15 10:36:06.971282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.295 [2024-11-15 10:36:06.971323] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.295 [2024-11-15 10:36:06.971338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.295 [2024-11-15 10:36:06.976267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.295 [2024-11-15 10:36:06.976305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.295 [2024-11-15 10:36:06.976319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.295 [2024-11-15 10:36:06.981248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.295 [2024-11-15 10:36:06.981296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.295 [2024-11-15 10:36:06.981312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.295 [2024-11-15 10:36:06.986430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.295 [2024-11-15 10:36:06.986467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.295 [2024-11-15 10:36:06.986481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.295 [2024-11-15 10:36:06.991447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.295 [2024-11-15 10:36:06.991487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.295 [2024-11-15 10:36:06.991501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.295 [2024-11-15 10:36:06.996515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.295 [2024-11-15 10:36:06.996760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.295 [2024-11-15 10:36:06.996787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.295 [2024-11-15 10:36:07.001851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.295 [2024-11-15 10:36:07.001891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.295 [2024-11-15 10:36:07.001914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.295 [2024-11-15 10:36:07.006923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 
00:18:06.295 [2024-11-15 10:36:07.006962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.296 [2024-11-15 10:36:07.006976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.296 [2024-11-15 10:36:07.011827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.296 [2024-11-15 10:36:07.012025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.296 [2024-11-15 10:36:07.012045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.296 [2024-11-15 10:36:07.017019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.296 [2024-11-15 10:36:07.017087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.296 [2024-11-15 10:36:07.017117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.296 [2024-11-15 10:36:07.021872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.296 [2024-11-15 10:36:07.021915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.296 [2024-11-15 10:36:07.021929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.296 [2024-11-15 10:36:07.026798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.296 [2024-11-15 10:36:07.026838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.296 [2024-11-15 10:36:07.026851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.296 [2024-11-15 10:36:07.031754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.296 [2024-11-15 10:36:07.031943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.296 [2024-11-15 10:36:07.031961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.296 [2024-11-15 10:36:07.036891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.296 [2024-11-15 10:36:07.036930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.296 [2024-11-15 10:36:07.036959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.296 [2024-11-15 10:36:07.041839] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.296 [2024-11-15 10:36:07.041877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.296 [2024-11-15 10:36:07.041891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.296 [2024-11-15 10:36:07.046876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.296 [2024-11-15 10:36:07.046914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.296 [2024-11-15 10:36:07.046928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.296 6029.00 IOPS, 753.62 MiB/s [2024-11-15T10:36:07.149Z] [2024-11-15 10:36:07.052750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.296 [2024-11-15 10:36:07.052795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.296 [2024-11-15 10:36:07.052809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.296 [2024-11-15 10:36:07.057831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.296 [2024-11-15 10:36:07.058123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.296 [2024-11-15 10:36:07.058142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.296 [2024-11-15 10:36:07.063224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.296 [2024-11-15 10:36:07.063261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.296 [2024-11-15 10:36:07.063274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.296 [2024-11-15 10:36:07.068228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.296 [2024-11-15 10:36:07.068264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.296 [2024-11-15 10:36:07.068277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.296 [2024-11-15 10:36:07.073069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.296 [2024-11-15 10:36:07.073118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.296 [2024-11-15 10:36:07.073131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.296 [2024-11-15 10:36:07.077856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.296 [2024-11-15 10:36:07.077893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.296 [2024-11-15 10:36:07.077906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.296 [2024-11-15 10:36:07.082635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.296 [2024-11-15 10:36:07.082687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.296 [2024-11-15 10:36:07.082701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.296 [2024-11-15 10:36:07.087503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.296 [2024-11-15 10:36:07.087549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.296 [2024-11-15 10:36:07.087563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.296 [2024-11-15 10:36:07.092478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.297 [2024-11-15 10:36:07.092799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.297 [2024-11-15 10:36:07.092817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.297 [2024-11-15 10:36:07.097893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.297 [2024-11-15 10:36:07.097934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.297 [2024-11-15 10:36:07.097948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.297 [2024-11-15 10:36:07.103005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.297 [2024-11-15 10:36:07.103072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.297 [2024-11-15 10:36:07.103100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.297 [2024-11-15 10:36:07.107881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.297 [2024-11-15 10:36:07.108127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.297 [2024-11-15 10:36:07.108143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.297 [2024-11-15 10:36:07.113268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.297 [2024-11-15 10:36:07.113463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.297 [2024-11-15 10:36:07.113588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.297 [2024-11-15 10:36:07.118741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.297 [2024-11-15 10:36:07.118930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.297 [2024-11-15 10:36:07.119203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.297 [2024-11-15 10:36:07.124366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.297 [2024-11-15 10:36:07.124606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.297 [2024-11-15 10:36:07.124773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.297 [2024-11-15 10:36:07.129656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.297 [2024-11-15 10:36:07.129859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.297 [2024-11-15 10:36:07.130210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.297 [2024-11-15 10:36:07.135123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.297 [2024-11-15 10:36:07.135313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.297 [2024-11-15 10:36:07.135539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.297 [2024-11-15 10:36:07.140559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.297 [2024-11-15 10:36:07.140738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.297 [2024-11-15 10:36:07.140910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.557 [2024-11-15 10:36:07.145849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.557 [2024-11-15 10:36:07.146031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:06.557 [2024-11-15 10:36:07.146187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.557 [2024-11-15 10:36:07.151063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.557 [2024-11-15 10:36:07.151242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.557 [2024-11-15 10:36:07.151388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.557 [2024-11-15 10:36:07.156508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.557 [2024-11-15 10:36:07.156699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.557 [2024-11-15 10:36:07.156827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.557 [2024-11-15 10:36:07.161758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.557 [2024-11-15 10:36:07.161800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.557 [2024-11-15 10:36:07.161814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.557 [2024-11-15 10:36:07.166661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.557 [2024-11-15 10:36:07.166704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.557 [2024-11-15 10:36:07.166727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.557 [2024-11-15 10:36:07.171565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.557 [2024-11-15 10:36:07.171775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.557 [2024-11-15 10:36:07.171793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.557 [2024-11-15 10:36:07.176734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.557 [2024-11-15 10:36:07.176774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.557 [2024-11-15 10:36:07.176788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.557 [2024-11-15 10:36:07.181698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.557 [2024-11-15 10:36:07.181736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17888 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.557 [2024-11-15 10:36:07.181750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.557 [2024-11-15 10:36:07.186607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.557 [2024-11-15 10:36:07.186807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.557 [2024-11-15 10:36:07.186825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.557 [2024-11-15 10:36:07.191706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.557 [2024-11-15 10:36:07.191753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.557 [2024-11-15 10:36:07.191767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.557 [2024-11-15 10:36:07.196610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.558 [2024-11-15 10:36:07.196649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.558 [2024-11-15 10:36:07.196663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.558 [2024-11-15 10:36:07.201455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.558 [2024-11-15 10:36:07.201651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.558 [2024-11-15 10:36:07.201669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.558 [2024-11-15 10:36:07.206549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.558 [2024-11-15 10:36:07.206589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.558 [2024-11-15 10:36:07.206603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.558 [2024-11-15 10:36:07.211445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.558 [2024-11-15 10:36:07.211484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.558 [2024-11-15 10:36:07.211498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.558 [2024-11-15 10:36:07.216407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.558 [2024-11-15 10:36:07.216594] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.558 [2024-11-15 10:36:07.216611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.558 [2024-11-15 10:36:07.221447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.558 [2024-11-15 10:36:07.221485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.558 [2024-11-15 10:36:07.221498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.558 [2024-11-15 10:36:07.226431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.558 [2024-11-15 10:36:07.226469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.558 [2024-11-15 10:36:07.226491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.558 [2024-11-15 10:36:07.231407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.558 [2024-11-15 10:36:07.231574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.558 [2024-11-15 10:36:07.231596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.558 [2024-11-15 10:36:07.237146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.558 [2024-11-15 10:36:07.237184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.558 [2024-11-15 10:36:07.237198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.558 [2024-11-15 10:36:07.242800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.558 [2024-11-15 10:36:07.242840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.558 [2024-11-15 10:36:07.242854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.558 [2024-11-15 10:36:07.247843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.558 [2024-11-15 10:36:07.247883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.558 [2024-11-15 10:36:07.247897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.558 [2024-11-15 10:36:07.252870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.558 
[2024-11-15 10:36:07.252912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.558 [2024-11-15 10:36:07.252926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.558 [2024-11-15 10:36:07.257862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.558 [2024-11-15 10:36:07.258115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.558 [2024-11-15 10:36:07.258133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.558 [2024-11-15 10:36:07.263039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.558 [2024-11-15 10:36:07.263124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.558 [2024-11-15 10:36:07.263138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.558 [2024-11-15 10:36:07.267920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.558 [2024-11-15 10:36:07.267974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.558 [2024-11-15 10:36:07.268002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.558 [2024-11-15 10:36:07.272902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.558 [2024-11-15 10:36:07.272942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.558 [2024-11-15 10:36:07.272955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.558 [2024-11-15 10:36:07.277752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.558 [2024-11-15 10:36:07.277952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.558 [2024-11-15 10:36:07.277985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.558 [2024-11-15 10:36:07.282771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.558 [2024-11-15 10:36:07.282809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.558 [2024-11-15 10:36:07.282822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.558 [2024-11-15 10:36:07.287575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1e6da90) 00:18:06.558 [2024-11-15 10:36:07.287615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.558 [2024-11-15 10:36:07.287629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.558 [2024-11-15 10:36:07.292339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.558 [2024-11-15 10:36:07.292379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.558 [2024-11-15 10:36:07.292393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.558 [2024-11-15 10:36:07.297243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.558 [2024-11-15 10:36:07.297288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.558 [2024-11-15 10:36:07.297302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.558 [2024-11-15 10:36:07.302187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.558 [2024-11-15 10:36:07.302224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.558 [2024-11-15 10:36:07.302237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.558 [2024-11-15 10:36:07.307049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.558 [2024-11-15 10:36:07.307110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.558 [2024-11-15 10:36:07.307124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.558 [2024-11-15 10:36:07.312050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.558 [2024-11-15 10:36:07.312103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.558 [2024-11-15 10:36:07.312118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.558 [2024-11-15 10:36:07.317020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.558 [2024-11-15 10:36:07.317317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.558 [2024-11-15 10:36:07.317335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.559 [2024-11-15 10:36:07.322415] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.559 [2024-11-15 10:36:07.322524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.559 [2024-11-15 10:36:07.322676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.559 [2024-11-15 10:36:07.327493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.559 [2024-11-15 10:36:07.327673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.559 [2024-11-15 10:36:07.327845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.559 [2024-11-15 10:36:07.332748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.559 [2024-11-15 10:36:07.332943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.559 [2024-11-15 10:36:07.333092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.559 [2024-11-15 10:36:07.338097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.559 [2024-11-15 10:36:07.338286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.559 [2024-11-15 10:36:07.338413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.559 [2024-11-15 10:36:07.343321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.559 [2024-11-15 10:36:07.343531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.559 [2024-11-15 10:36:07.343658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.559 [2024-11-15 10:36:07.348639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.559 [2024-11-15 10:36:07.348819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.559 [2024-11-15 10:36:07.348947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.559 [2024-11-15 10:36:07.354192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.559 [2024-11-15 10:36:07.354381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.559 [2024-11-15 10:36:07.354602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0043 
p:0 m:0 dnr:0 00:18:06.559 [2024-11-15 10:36:07.359674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.559 [2024-11-15 10:36:07.359855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.559 [2024-11-15 10:36:07.359990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.559 [2024-11-15 10:36:07.365210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.559 [2024-11-15 10:36:07.365387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.559 [2024-11-15 10:36:07.365521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.559 [2024-11-15 10:36:07.370489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.559 [2024-11-15 10:36:07.370694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.559 [2024-11-15 10:36:07.370715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.559 [2024-11-15 10:36:07.375702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.559 [2024-11-15 10:36:07.375881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.559 [2024-11-15 10:36:07.376008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.559 [2024-11-15 10:36:07.380930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.559 [2024-11-15 10:36:07.381127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.559 [2024-11-15 10:36:07.381356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.559 [2024-11-15 10:36:07.386322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.559 [2024-11-15 10:36:07.386509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.559 [2024-11-15 10:36:07.386706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.559 [2024-11-15 10:36:07.391690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.559 [2024-11-15 10:36:07.391873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.559 [2024-11-15 10:36:07.392005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.559 [2024-11-15 10:36:07.397017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.559 [2024-11-15 10:36:07.397215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.559 [2024-11-15 10:36:07.397356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.559 [2024-11-15 10:36:07.402289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.559 [2024-11-15 10:36:07.402482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.559 [2024-11-15 10:36:07.402664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.819 [2024-11-15 10:36:07.407847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.819 [2024-11-15 10:36:07.408021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.819 [2024-11-15 10:36:07.408248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.819 [2024-11-15 10:36:07.413588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.819 [2024-11-15 10:36:07.413783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.819 [2024-11-15 10:36:07.413912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.819 [2024-11-15 10:36:07.419054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.819 [2024-11-15 10:36:07.419278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.819 [2024-11-15 10:36:07.419436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.819 [2024-11-15 10:36:07.424469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.819 [2024-11-15 10:36:07.424651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.819 [2024-11-15 10:36:07.424782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.819 [2024-11-15 10:36:07.429832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.819 [2024-11-15 10:36:07.430040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.819 [2024-11-15 10:36:07.430193] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.819 [2024-11-15 10:36:07.435427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.819 [2024-11-15 10:36:07.435468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.819 [2024-11-15 10:36:07.435482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.819 [2024-11-15 10:36:07.440467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.819 [2024-11-15 10:36:07.440506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.819 [2024-11-15 10:36:07.440529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.819 [2024-11-15 10:36:07.445546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.819 [2024-11-15 10:36:07.445585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.819 [2024-11-15 10:36:07.445609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.819 [2024-11-15 10:36:07.450512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.819 [2024-11-15 10:36:07.450746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.819 [2024-11-15 10:36:07.450764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.819 [2024-11-15 10:36:07.455890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.819 [2024-11-15 10:36:07.455943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.819 [2024-11-15 10:36:07.455957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.819 [2024-11-15 10:36:07.460883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.819 [2024-11-15 10:36:07.460922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.819 [2024-11-15 10:36:07.460943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.819 [2024-11-15 10:36:07.465893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.819 [2024-11-15 10:36:07.466119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.819 
[2024-11-15 10:36:07.466137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.819 [2024-11-15 10:36:07.471155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.819 [2024-11-15 10:36:07.471194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.819 [2024-11-15 10:36:07.471207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.820 [2024-11-15 10:36:07.476273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.820 [2024-11-15 10:36:07.476309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.820 [2024-11-15 10:36:07.476322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.820 [2024-11-15 10:36:07.481007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.820 [2024-11-15 10:36:07.481045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.820 [2024-11-15 10:36:07.481081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.820 [2024-11-15 10:36:07.485824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.820 [2024-11-15 10:36:07.486019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.820 [2024-11-15 10:36:07.486036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.820 [2024-11-15 10:36:07.490817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.820 [2024-11-15 10:36:07.490858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.820 [2024-11-15 10:36:07.490871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.820 [2024-11-15 10:36:07.495720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.820 [2024-11-15 10:36:07.495761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.820 [2024-11-15 10:36:07.495775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.820 [2024-11-15 10:36:07.500721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.820 [2024-11-15 10:36:07.500896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18464 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.820 [2024-11-15 10:36:07.500913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.820 [2024-11-15 10:36:07.505859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.820 [2024-11-15 10:36:07.505899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.820 [2024-11-15 10:36:07.505913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.820 [2024-11-15 10:36:07.510916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.820 [2024-11-15 10:36:07.510956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.820 [2024-11-15 10:36:07.510970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.820 [2024-11-15 10:36:07.516036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.820 [2024-11-15 10:36:07.516226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.820 [2024-11-15 10:36:07.516244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.820 [2024-11-15 10:36:07.521208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.820 [2024-11-15 10:36:07.521248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.820 [2024-11-15 10:36:07.521261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.820 [2024-11-15 10:36:07.526154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.820 [2024-11-15 10:36:07.526191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.820 [2024-11-15 10:36:07.526204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.820 [2024-11-15 10:36:07.531111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.820 [2024-11-15 10:36:07.531149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.820 [2024-11-15 10:36:07.531162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.820 [2024-11-15 10:36:07.536117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.820 [2024-11-15 10:36:07.536155] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.820 [2024-11-15 10:36:07.536169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.820 [2024-11-15 10:36:07.541209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.820 [2024-11-15 10:36:07.541247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.820 [2024-11-15 10:36:07.541260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.820 [2024-11-15 10:36:07.546223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.820 [2024-11-15 10:36:07.546281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.820 [2024-11-15 10:36:07.546295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.820 [2024-11-15 10:36:07.551270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.820 [2024-11-15 10:36:07.551307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.820 [2024-11-15 10:36:07.551347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.820 [2024-11-15 10:36:07.556465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.820 [2024-11-15 10:36:07.556503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.820 [2024-11-15 10:36:07.556527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.820 [2024-11-15 10:36:07.561520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.820 [2024-11-15 10:36:07.561789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.820 [2024-11-15 10:36:07.561809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.820 [2024-11-15 10:36:07.566803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.820 [2024-11-15 10:36:07.566843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.820 [2024-11-15 10:36:07.566868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.820 [2024-11-15 10:36:07.571869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.820 [2024-11-15 10:36:07.571908] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.820 [2024-11-15 10:36:07.571932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.821 [2024-11-15 10:36:07.577113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.821 [2024-11-15 10:36:07.577152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.821 [2024-11-15 10:36:07.577165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.821 [2024-11-15 10:36:07.582122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.821 [2024-11-15 10:36:07.582161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.821 [2024-11-15 10:36:07.582185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.821 [2024-11-15 10:36:07.587087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.821 [2024-11-15 10:36:07.587126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.821 [2024-11-15 10:36:07.587140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.821 [2024-11-15 10:36:07.592215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.821 [2024-11-15 10:36:07.592254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.821 [2024-11-15 10:36:07.592278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.821 [2024-11-15 10:36:07.597239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.821 [2024-11-15 10:36:07.597276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.821 [2024-11-15 10:36:07.597289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.821 [2024-11-15 10:36:07.602155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.821 [2024-11-15 10:36:07.602191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.821 [2024-11-15 10:36:07.602217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.821 [2024-11-15 10:36:07.607216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e6da90) 00:18:06.821 [2024-11-15 10:36:07.607253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.821 [2024-11-15 10:36:07.607266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.821 [2024-11-15 10:36:07.612316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.821 [2024-11-15 10:36:07.612354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.821 [2024-11-15 10:36:07.612368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.821 [2024-11-15 10:36:07.617373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.821 [2024-11-15 10:36:07.617410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.821 [2024-11-15 10:36:07.617434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.821 [2024-11-15 10:36:07.622410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.821 [2024-11-15 10:36:07.622663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.821 [2024-11-15 10:36:07.622680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.821 [2024-11-15 10:36:07.627655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.821 [2024-11-15 10:36:07.627695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.821 [2024-11-15 10:36:07.627708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.821 [2024-11-15 10:36:07.632781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.821 [2024-11-15 10:36:07.632821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.821 [2024-11-15 10:36:07.632835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.821 [2024-11-15 10:36:07.637905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.821 [2024-11-15 10:36:07.638105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.821 [2024-11-15 10:36:07.638125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.821 [2024-11-15 10:36:07.643076] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.821 [2024-11-15 10:36:07.643129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.821 [2024-11-15 10:36:07.643142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.821 [2024-11-15 10:36:07.647988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.821 [2024-11-15 10:36:07.648027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.821 [2024-11-15 10:36:07.648041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:06.821 [2024-11-15 10:36:07.652827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.821 [2024-11-15 10:36:07.652866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.821 [2024-11-15 10:36:07.652889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:06.821 [2024-11-15 10:36:07.657867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.821 [2024-11-15 10:36:07.658086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.821 [2024-11-15 10:36:07.658105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.821 [2024-11-15 10:36:07.663035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.821 [2024-11-15 10:36:07.663098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.821 [2024-11-15 10:36:07.663128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:06.821 [2024-11-15 10:36:07.668246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:06.821 [2024-11-15 10:36:07.668285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.821 [2024-11-15 10:36:07.668299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:07.081 [2024-11-15 10:36:07.673273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.081 [2024-11-15 10:36:07.673480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.081 [2024-11-15 10:36:07.673500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 
m:0 dnr:0 00:18:07.081 [2024-11-15 10:36:07.678475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.081 [2024-11-15 10:36:07.678516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.081 [2024-11-15 10:36:07.678538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:07.081 [2024-11-15 10:36:07.683457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.081 [2024-11-15 10:36:07.683504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.081 [2024-11-15 10:36:07.683519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:07.081 [2024-11-15 10:36:07.688522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.081 [2024-11-15 10:36:07.688736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.081 [2024-11-15 10:36:07.688754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:07.081 [2024-11-15 10:36:07.693715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.081 [2024-11-15 10:36:07.693766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.081 [2024-11-15 10:36:07.693790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:07.081 [2024-11-15 10:36:07.698696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.081 [2024-11-15 10:36:07.698749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.081 [2024-11-15 10:36:07.698773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:07.081 [2024-11-15 10:36:07.703624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.081 [2024-11-15 10:36:07.703860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.081 [2024-11-15 10:36:07.703879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:07.081 [2024-11-15 10:36:07.708944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.081 [2024-11-15 10:36:07.708985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.081 [2024-11-15 10:36:07.709009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:07.081 [2024-11-15 10:36:07.713942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.081 [2024-11-15 10:36:07.713996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.081 [2024-11-15 10:36:07.714019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:07.081 [2024-11-15 10:36:07.718948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.081 [2024-11-15 10:36:07.719243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.081 [2024-11-15 10:36:07.719261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:07.081 [2024-11-15 10:36:07.724263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.081 [2024-11-15 10:36:07.724310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.081 [2024-11-15 10:36:07.724324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:07.081 [2024-11-15 10:36:07.729279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.081 [2024-11-15 10:36:07.729320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.081 [2024-11-15 10:36:07.729344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:07.081 [2024-11-15 10:36:07.734153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.081 [2024-11-15 10:36:07.734193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.081 [2024-11-15 10:36:07.734207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:07.081 [2024-11-15 10:36:07.739264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.081 [2024-11-15 10:36:07.739302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.081 [2024-11-15 10:36:07.739315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:07.082 [2024-11-15 10:36:07.744341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.082 [2024-11-15 10:36:07.744385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.082 [2024-11-15 10:36:07.744410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:07.082 [2024-11-15 10:36:07.749290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.082 [2024-11-15 10:36:07.749335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.082 [2024-11-15 10:36:07.749361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:07.082 [2024-11-15 10:36:07.754778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.082 [2024-11-15 10:36:07.754819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.082 [2024-11-15 10:36:07.754834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:07.082 [2024-11-15 10:36:07.760654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.082 [2024-11-15 10:36:07.760910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.082 [2024-11-15 10:36:07.760929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:07.082 [2024-11-15 10:36:07.765868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.082 [2024-11-15 10:36:07.765909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.082 [2024-11-15 10:36:07.765935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:07.082 [2024-11-15 10:36:07.770781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.082 [2024-11-15 10:36:07.770820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.082 [2024-11-15 10:36:07.770845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:07.082 [2024-11-15 10:36:07.775796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.082 [2024-11-15 10:36:07.775835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.082 [2024-11-15 10:36:07.775861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:07.082 [2024-11-15 10:36:07.780747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.082 [2024-11-15 10:36:07.780953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.082 
[2024-11-15 10:36:07.780973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:07.082 [2024-11-15 10:36:07.785927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.082 [2024-11-15 10:36:07.785968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.082 [2024-11-15 10:36:07.785993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:07.082 [2024-11-15 10:36:07.790873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.082 [2024-11-15 10:36:07.790912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.082 [2024-11-15 10:36:07.790936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:07.082 [2024-11-15 10:36:07.795758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.082 [2024-11-15 10:36:07.795797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.082 [2024-11-15 10:36:07.795811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:07.082 [2024-11-15 10:36:07.800660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.082 [2024-11-15 10:36:07.800870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.082 [2024-11-15 10:36:07.800887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:07.082 [2024-11-15 10:36:07.805910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.082 [2024-11-15 10:36:07.805950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.082 [2024-11-15 10:36:07.805971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:07.082 [2024-11-15 10:36:07.810909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.082 [2024-11-15 10:36:07.810948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.082 [2024-11-15 10:36:07.810962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:07.082 [2024-11-15 10:36:07.815820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.082 [2024-11-15 10:36:07.815859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.082 [2024-11-15 10:36:07.815883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:07.082 [2024-11-15 10:36:07.820718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.082 [2024-11-15 10:36:07.820757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.082 [2024-11-15 10:36:07.820781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:07.082 [2024-11-15 10:36:07.825770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.082 [2024-11-15 10:36:07.825810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.082 [2024-11-15 10:36:07.825834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:07.082 [2024-11-15 10:36:07.830797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.082 [2024-11-15 10:36:07.831069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.082 [2024-11-15 10:36:07.831093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:07.082 [2024-11-15 10:36:07.835937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.082 [2024-11-15 10:36:07.835977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.082 [2024-11-15 10:36:07.836000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:07.082 [2024-11-15 10:36:07.840929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.082 [2024-11-15 10:36:07.840968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.082 [2024-11-15 10:36:07.840982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:07.082 [2024-11-15 10:36:07.845900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.083 [2024-11-15 10:36:07.845948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.083 [2024-11-15 10:36:07.845961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:07.083 [2024-11-15 10:36:07.850828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.083 [2024-11-15 10:36:07.851027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.083 [2024-11-15 10:36:07.851045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:07.083 [2024-11-15 10:36:07.856063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.083 [2024-11-15 10:36:07.856101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.083 [2024-11-15 10:36:07.856116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:07.083 [2024-11-15 10:36:07.861061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.083 [2024-11-15 10:36:07.861150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.083 [2024-11-15 10:36:07.861165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:07.083 [2024-11-15 10:36:07.866103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.083 [2024-11-15 10:36:07.866171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.083 [2024-11-15 10:36:07.866185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:07.083 [2024-11-15 10:36:07.871315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.083 [2024-11-15 10:36:07.871352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.083 [2024-11-15 10:36:07.871407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:07.083 [2024-11-15 10:36:07.876394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.083 [2024-11-15 10:36:07.876431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.083 [2024-11-15 10:36:07.876444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:07.083 [2024-11-15 10:36:07.881413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.083 [2024-11-15 10:36:07.881449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.083 [2024-11-15 10:36:07.881462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:07.083 [2024-11-15 10:36:07.886365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.083 [2024-11-15 10:36:07.886644] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.083 [2024-11-15 10:36:07.886677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:07.083 [2024-11-15 10:36:07.891554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.083 [2024-11-15 10:36:07.891598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.083 [2024-11-15 10:36:07.891612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:07.083 [2024-11-15 10:36:07.896592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.083 [2024-11-15 10:36:07.896633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.083 [2024-11-15 10:36:07.896647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:07.083 [2024-11-15 10:36:07.901521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.083 [2024-11-15 10:36:07.901767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.083 [2024-11-15 10:36:07.901802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:07.083 [2024-11-15 10:36:07.906752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.083 [2024-11-15 10:36:07.906791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.083 [2024-11-15 10:36:07.906803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:07.083 [2024-11-15 10:36:07.911551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.083 [2024-11-15 10:36:07.911590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.083 [2024-11-15 10:36:07.911604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:07.083 [2024-11-15 10:36:07.916622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.083 [2024-11-15 10:36:07.916833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.083 [2024-11-15 10:36:07.916852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:07.083 [2024-11-15 10:36:07.921762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e6da90) 00:18:07.083 [2024-11-15 10:36:07.921799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.083 [2024-11-15 10:36:07.921823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:07.083 [2024-11-15 10:36:07.926672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.083 [2024-11-15 10:36:07.926727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.083 [2024-11-15 10:36:07.926741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:07.342 [2024-11-15 10:36:07.931821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.342 [2024-11-15 10:36:07.931994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.342 [2024-11-15 10:36:07.932012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:07.342 [2024-11-15 10:36:07.937168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.342 [2024-11-15 10:36:07.937207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.342 [2024-11-15 10:36:07.937236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:07.342 [2024-11-15 10:36:07.942286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.342 [2024-11-15 10:36:07.942325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.342 [2024-11-15 10:36:07.942339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:07.342 [2024-11-15 10:36:07.947183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.342 [2024-11-15 10:36:07.947223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.342 [2024-11-15 10:36:07.947238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:07.342 [2024-11-15 10:36:07.952178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.342 [2024-11-15 10:36:07.952217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.342 [2024-11-15 10:36:07.952231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:07.342 [2024-11-15 10:36:07.957185] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.342 [2024-11-15 10:36:07.957224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.343 [2024-11-15 10:36:07.957238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:07.343 [2024-11-15 10:36:07.961978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.343 [2024-11-15 10:36:07.962016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.343 [2024-11-15 10:36:07.962030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:07.343 [2024-11-15 10:36:07.966977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.343 [2024-11-15 10:36:07.967233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.343 [2024-11-15 10:36:07.967251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:07.343 [2024-11-15 10:36:07.972258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.343 [2024-11-15 10:36:07.972312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.343 [2024-11-15 10:36:07.972326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:07.343 [2024-11-15 10:36:07.977358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.343 [2024-11-15 10:36:07.977395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.343 [2024-11-15 10:36:07.977420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:07.343 [2024-11-15 10:36:07.982390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.343 [2024-11-15 10:36:07.982583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.343 [2024-11-15 10:36:07.982601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:07.343 [2024-11-15 10:36:07.987575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.343 [2024-11-15 10:36:07.987615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.343 [2024-11-15 10:36:07.987629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 
dnr:0 00:18:07.343 [2024-11-15 10:36:07.992492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.343 [2024-11-15 10:36:07.992530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.343 [2024-11-15 10:36:07.992544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:07.343 [2024-11-15 10:36:07.997493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.343 [2024-11-15 10:36:07.997677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.343 [2024-11-15 10:36:07.997695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:07.343 [2024-11-15 10:36:08.002614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.343 [2024-11-15 10:36:08.002683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.343 [2024-11-15 10:36:08.002697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:07.343 [2024-11-15 10:36:08.007595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.343 [2024-11-15 10:36:08.007644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.343 [2024-11-15 10:36:08.007658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:07.343 [2024-11-15 10:36:08.012582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.343 [2024-11-15 10:36:08.012877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.343 [2024-11-15 10:36:08.012896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:07.343 [2024-11-15 10:36:08.017927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.343 [2024-11-15 10:36:08.017967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.343 [2024-11-15 10:36:08.017980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:07.343 [2024-11-15 10:36:08.022717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.343 [2024-11-15 10:36:08.022755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.343 [2024-11-15 10:36:08.022769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:07.343 [2024-11-15 10:36:08.027518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.343 [2024-11-15 10:36:08.027683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.343 [2024-11-15 10:36:08.027716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:07.343 [2024-11-15 10:36:08.032492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.343 [2024-11-15 10:36:08.032529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.343 [2024-11-15 10:36:08.032553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:07.343 [2024-11-15 10:36:08.037263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.343 [2024-11-15 10:36:08.037327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.343 [2024-11-15 10:36:08.037342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:07.343 [2024-11-15 10:36:08.042290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.343 [2024-11-15 10:36:08.042331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.343 [2024-11-15 10:36:08.042344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:07.343 [2024-11-15 10:36:08.048954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e6da90) 00:18:07.343 [2024-11-15 10:36:08.048994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.343 [2024-11-15 10:36:08.049007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:07.343 6045.00 IOPS, 755.62 MiB/s 00:18:07.343 Latency(us) 00:18:07.343 [2024-11-15T10:36:08.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.343 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:07.343 nvme0n1 : 2.00 6047.34 755.92 0.00 0.00 2641.90 2234.18 9413.35 00:18:07.343 [2024-11-15T10:36:08.197Z] =================================================================================================================== 00:18:07.344 [2024-11-15T10:36:08.197Z] Total : 6047.34 755.92 0.00 0.00 2641.90 2234.18 9413.35 00:18:07.344 { 00:18:07.344 "results": [ 00:18:07.344 { 00:18:07.344 "job": "nvme0n1", 00:18:07.344 "core_mask": "0x2", 00:18:07.344 "workload": "randread", 00:18:07.344 "status": "finished", 00:18:07.344 "queue_depth": 16, 00:18:07.344 "io_size": 131072, 00:18:07.344 "runtime": 2.004352, 00:18:07.344 "iops": 
6047.340986014433, 00:18:07.344 "mibps": 755.9176232518041, 00:18:07.344 "io_failed": 0, 00:18:07.344 "io_timeout": 0, 00:18:07.344 "avg_latency_us": 2641.9011068693703, 00:18:07.344 "min_latency_us": 2234.181818181818, 00:18:07.344 "max_latency_us": 9413.352727272728 00:18:07.344 } 00:18:07.344 ], 00:18:07.344 "core_count": 1 00:18:07.344 } 00:18:07.344 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:07.344 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:07.344 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:07.344 | .driver_specific 00:18:07.344 | .nvme_error 00:18:07.344 | .status_code 00:18:07.344 | .command_transient_transport_error' 00:18:07.344 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:07.602 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 391 > 0 )) 00:18:07.602 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80505 00:18:07.603 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80505 ']' 00:18:07.603 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80505 00:18:07.603 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:18:07.603 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:07.603 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80505 00:18:07.603 killing process with pid 80505 00:18:07.603 Received shutdown signal, test time was about 2.000000 seconds 00:18:07.603 00:18:07.603 Latency(us) 00:18:07.603 [2024-11-15T10:36:08.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.603 [2024-11-15T10:36:08.456Z] =================================================================================================================== 00:18:07.603 [2024-11-15T10:36:08.456Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:07.603 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:07.603 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:07.603 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80505' 00:18:07.603 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80505 00:18:07.603 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80505 00:18:07.863 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:18:07.863 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:07.863 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:07.863 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:07.863 10:36:08 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:07.863 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80558 00:18:07.863 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:18:07.863 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80558 /var/tmp/bperf.sock 00:18:07.863 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80558 ']' 00:18:07.863 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:07.863 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:07.863 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:07.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:07.863 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:07.863 10:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:08.129 [2024-11-15 10:36:08.735744] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:18:08.129 [2024-11-15 10:36:08.736706] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80558 ] 00:18:08.130 [2024-11-15 10:36:08.879087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.130 [2024-11-15 10:36:08.960578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.388 [2024-11-15 10:36:09.036507] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:08.955 10:36:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:08.955 10:36:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:18:08.955 10:36:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:08.955 10:36:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:09.245 10:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:09.245 10:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.245 10:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:09.245 10:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.245 10:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t 
tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:09.245 10:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:09.813 nvme0n1 00:18:09.813 10:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:09.813 10:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.813 10:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:09.813 10:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.813 10:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:09.813 10:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:09.813 Running I/O for 2 seconds... 00:18:09.813 [2024-11-15 10:36:10.566438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef7538 00:18:09.814 [2024-11-15 10:36:10.568083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.814 [2024-11-15 10:36:10.568263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.814 [2024-11-15 10:36:10.582821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef7da8 00:18:09.814 [2024-11-15 10:36:10.584576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.814 [2024-11-15 10:36:10.584616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.814 [2024-11-15 10:36:10.599240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef8618 00:18:09.814 [2024-11-15 10:36:10.600789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.814 [2024-11-15 10:36:10.600827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:09.814 [2024-11-15 10:36:10.615438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef8e88 00:18:09.814 [2024-11-15 10:36:10.616952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.814 [2024-11-15 10:36:10.616989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:09.814 [2024-11-15 10:36:10.631634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef96f8 00:18:09.814 [2024-11-15 10:36:10.633134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 
nsid:1 lba:10113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.814 [2024-11-15 10:36:10.633169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:09.814 [2024-11-15 10:36:10.647821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef9f68 00:18:09.814 [2024-11-15 10:36:10.649317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.814 [2024-11-15 10:36:10.649352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:09.814 [2024-11-15 10:36:10.664124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016efa7d8 00:18:10.074 [2024-11-15 10:36:10.665598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.074 [2024-11-15 10:36:10.665638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:10.074 [2024-11-15 10:36:10.680507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016efb048 00:18:10.074 [2024-11-15 10:36:10.681942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.074 [2024-11-15 10:36:10.681980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:10.074 [2024-11-15 10:36:10.697004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016efb8b8 00:18:10.074 [2024-11-15 10:36:10.698487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.074 [2024-11-15 10:36:10.698674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:10.074 [2024-11-15 10:36:10.713826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016efc128 00:18:10.074 [2024-11-15 10:36:10.715320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.074 [2024-11-15 10:36:10.715511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:10.074 [2024-11-15 10:36:10.730237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016efc998 00:18:10.074 [2024-11-15 10:36:10.731676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.074 [2024-11-15 10:36:10.731729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:10.074 [2024-11-15 10:36:10.746387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016efd208 00:18:10.074 [2024-11-15 10:36:10.747804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:8 nsid:1 lba:23507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.074 [2024-11-15 10:36:10.747841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:10.074 [2024-11-15 10:36:10.762728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016efda78 00:18:10.074 [2024-11-15 10:36:10.764204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.074 [2024-11-15 10:36:10.764241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:10.074 [2024-11-15 10:36:10.779142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016efe2e8 00:18:10.074 [2024-11-15 10:36:10.780553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.074 [2024-11-15 10:36:10.780587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:10.074 [2024-11-15 10:36:10.795231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016efeb58 00:18:10.074 [2024-11-15 10:36:10.796613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.074 [2024-11-15 10:36:10.796679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:10.074 [2024-11-15 10:36:10.817933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016efef90 00:18:10.074 [2024-11-15 10:36:10.820596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.074 [2024-11-15 10:36:10.820630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:10.074 [2024-11-15 10:36:10.834218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016efeb58 00:18:10.074 [2024-11-15 10:36:10.836896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.074 [2024-11-15 10:36:10.836935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.074 [2024-11-15 10:36:10.851117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016efe2e8 00:18:10.074 [2024-11-15 10:36:10.853833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.074 [2024-11-15 10:36:10.853871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:10.074 [2024-11-15 10:36:10.867755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016efda78 00:18:10.074 [2024-11-15 10:36:10.870282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:13 nsid:1 lba:5654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.074 [2024-11-15 10:36:10.870325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:10.074 [2024-11-15 10:36:10.884165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016efd208 00:18:10.074 [2024-11-15 10:36:10.886618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.074 [2024-11-15 10:36:10.886655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:10.074 [2024-11-15 10:36:10.900698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016efc998 00:18:10.074 [2024-11-15 10:36:10.903307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.074 [2024-11-15 10:36:10.903346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:10.074 [2024-11-15 10:36:10.917891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016efc128 00:18:10.074 [2024-11-15 10:36:10.920494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.074 [2024-11-15 10:36:10.920719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:10.333 [2024-11-15 10:36:10.934830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016efb8b8 00:18:10.333 [2024-11-15 10:36:10.937356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.333 [2024-11-15 10:36:10.937399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:10.333 [2024-11-15 10:36:10.951312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016efb048 00:18:10.333 [2024-11-15 10:36:10.953703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.333 [2024-11-15 10:36:10.953741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:10.333 [2024-11-15 10:36:10.968824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016efa7d8 00:18:10.333 [2024-11-15 10:36:10.971505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.333 [2024-11-15 10:36:10.971675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:10.333 [2024-11-15 10:36:10.985941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef9f68 00:18:10.333 [2024-11-15 10:36:10.988380] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.333 [2024-11-15 10:36:10.988468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:10.333 [2024-11-15 10:36:11.002875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef96f8 00:18:10.333 [2024-11-15 10:36:11.005345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.333 [2024-11-15 10:36:11.005582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:10.333 [2024-11-15 10:36:11.019751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef8e88 00:18:10.333 [2024-11-15 10:36:11.022470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.333 [2024-11-15 10:36:11.022672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:10.333 [2024-11-15 10:36:11.037026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef8618 00:18:10.333 [2024-11-15 10:36:11.039547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.333 [2024-11-15 10:36:11.039735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:10.333 [2024-11-15 10:36:11.054395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef7da8 00:18:10.333 [2024-11-15 10:36:11.056908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.333 [2024-11-15 10:36:11.057121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:10.333 [2024-11-15 10:36:11.071174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef7538 00:18:10.333 [2024-11-15 10:36:11.073517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.333 [2024-11-15 10:36:11.073729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:10.333 [2024-11-15 10:36:11.086971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef6cc8 00:18:10.333 [2024-11-15 10:36:11.089344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.333 [2024-11-15 10:36:11.089375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:10.333 [2024-11-15 10:36:11.102438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef6458 00:18:10.333 [2024-11-15 10:36:11.104872] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.333 [2024-11-15 10:36:11.104908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.333 [2024-11-15 10:36:11.118554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef5be8 00:18:10.333 [2024-11-15 10:36:11.120815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.333 [2024-11-15 10:36:11.120866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:10.333 [2024-11-15 10:36:11.135305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef5378 00:18:10.333 [2024-11-15 10:36:11.137637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.333 [2024-11-15 10:36:11.137687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:10.333 [2024-11-15 10:36:11.151981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef4b08 00:18:10.333 [2024-11-15 10:36:11.154160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.333 [2024-11-15 10:36:11.154404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:10.333 [2024-11-15 10:36:11.168221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef4298 00:18:10.333 [2024-11-15 10:36:11.170489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.333 [2024-11-15 10:36:11.170519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:10.593 [2024-11-15 10:36:11.184156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef3a28 00:18:10.593 [2024-11-15 10:36:11.186227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.593 [2024-11-15 10:36:11.186260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:10.593 [2024-11-15 10:36:11.199463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef31b8 00:18:10.593 [2024-11-15 10:36:11.201791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.593 [2024-11-15 10:36:11.201826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:10.593 [2024-11-15 10:36:11.214969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef2948 00:18:10.593 [2024-11-15 10:36:11.217134] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.593 [2024-11-15 10:36:11.217163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:10.593 [2024-11-15 10:36:11.230388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef20d8 00:18:10.593 [2024-11-15 10:36:11.232652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.594 [2024-11-15 10:36:11.232686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:10.594 [2024-11-15 10:36:11.246741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef1868 00:18:10.594 [2024-11-15 10:36:11.248877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.594 [2024-11-15 10:36:11.248913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:10.594 [2024-11-15 10:36:11.262427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef0ff8 00:18:10.594 [2024-11-15 10:36:11.264420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.594 [2024-11-15 10:36:11.264455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:10.594 [2024-11-15 10:36:11.278294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef0788 00:18:10.594 [2024-11-15 10:36:11.280507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.594 [2024-11-15 10:36:11.280542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:10.594 [2024-11-15 10:36:11.294819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016eeff18 00:18:10.594 [2024-11-15 10:36:11.296865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.594 [2024-11-15 10:36:11.296898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:10.594 [2024-11-15 10:36:11.310298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016eef6a8 00:18:10.594 [2024-11-15 10:36:11.312330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.594 [2024-11-15 10:36:11.312379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:10.594 [2024-11-15 10:36:11.325821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016eeee38 00:18:10.594 [2024-11-15 
10:36:11.327871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.594 [2024-11-15 10:36:11.327906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:10.594 [2024-11-15 10:36:11.341436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016eee5c8 00:18:10.594 [2024-11-15 10:36:11.343316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.594 [2024-11-15 10:36:11.343348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:10.594 [2024-11-15 10:36:11.356948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016eedd58 00:18:10.594 [2024-11-15 10:36:11.358880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.594 [2024-11-15 10:36:11.358915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.594 [2024-11-15 10:36:11.372818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016eed4e8 00:18:10.594 [2024-11-15 10:36:11.374826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.594 [2024-11-15 10:36:11.374862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:10.594 [2024-11-15 10:36:11.388630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016eecc78 00:18:10.594 [2024-11-15 10:36:11.390515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.594 [2024-11-15 10:36:11.390549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:10.594 [2024-11-15 10:36:11.404365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016eec408 00:18:10.594 [2024-11-15 10:36:11.406275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.594 [2024-11-15 10:36:11.406307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:10.594 [2024-11-15 10:36:11.421027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016eebb98 00:18:10.594 [2024-11-15 10:36:11.422976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.594 [2024-11-15 10:36:11.423057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:10.594 [2024-11-15 10:36:11.437150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016eeb328 
00:18:10.594 [2024-11-15 10:36:11.438914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.594 [2024-11-15 10:36:11.438948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:10.854 [2024-11-15 10:36:11.452911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016eeaab8 00:18:10.854 [2024-11-15 10:36:11.454875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.854 [2024-11-15 10:36:11.454921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:10.854 [2024-11-15 10:36:11.469622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016eea248 00:18:10.854 [2024-11-15 10:36:11.471941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.854 [2024-11-15 10:36:11.472149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:10.854 [2024-11-15 10:36:11.486562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee99d8 00:18:10.854 [2024-11-15 10:36:11.488369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.854 [2024-11-15 10:36:11.488406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:10.854 [2024-11-15 10:36:11.503138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee9168 00:18:10.854 [2024-11-15 10:36:11.504879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.854 [2024-11-15 10:36:11.504916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:10.854 [2024-11-15 10:36:11.518992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee88f8 00:18:10.854 [2024-11-15 10:36:11.520887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.854 [2024-11-15 10:36:11.520918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:10.854 [2024-11-15 10:36:11.535072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee8088 00:18:10.854 [2024-11-15 10:36:11.536778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.855 [2024-11-15 10:36:11.536812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:10.855 [2024-11-15 10:36:11.550892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with 
pdu=0x200016ee7818 15435.00 IOPS, 60.29 MiB/s [2024-11-15T10:36:11.708Z] 00:18:10.855 [2024-11-15 10:36:11.552776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.855 [2024-11-15 10:36:11.552814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:10.855 [2024-11-15 10:36:11.567527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee6fa8 00:18:10.855 [2024-11-15 10:36:11.569228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.855 [2024-11-15 10:36:11.569409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:10.855 [2024-11-15 10:36:11.583891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee6738 00:18:10.855 [2024-11-15 10:36:11.585681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.855 [2024-11-15 10:36:11.585710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:10.855 [2024-11-15 10:36:11.599999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee5ec8 00:18:10.855 [2024-11-15 10:36:11.601648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.855 [2024-11-15 10:36:11.601683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:10.855 [2024-11-15 10:36:11.615618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee5658 00:18:10.855 [2024-11-15 10:36:11.617311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.855 [2024-11-15 10:36:11.617361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.855 [2024-11-15 10:36:11.631950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee4de8 00:18:10.855 [2024-11-15 10:36:11.633572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.855 [2024-11-15 10:36:11.633604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:10.855 [2024-11-15 10:36:11.647854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee4578 00:18:10.855 [2024-11-15 10:36:11.649585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.855 [2024-11-15 10:36:11.649620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:10.855 [2024-11-15 10:36:11.664296] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee3d08 00:18:10.855 [2024-11-15 10:36:11.665831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.855 [2024-11-15 10:36:11.665867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:10.855 [2024-11-15 10:36:11.680748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee3498 00:18:10.855 [2024-11-15 10:36:11.682243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.855 [2024-11-15 10:36:11.682279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:10.855 [2024-11-15 10:36:11.697138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee2c28 00:18:10.855 [2024-11-15 10:36:11.698593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.855 [2024-11-15 10:36:11.698764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:11.114 [2024-11-15 10:36:11.713857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee23b8 00:18:11.114 [2024-11-15 10:36:11.715368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.114 [2024-11-15 10:36:11.715404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:11.114 [2024-11-15 10:36:11.730404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee1b48 00:18:11.114 [2024-11-15 10:36:11.731864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.114 [2024-11-15 10:36:11.731903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:11.114 [2024-11-15 10:36:11.747222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee12d8 00:18:11.114 [2024-11-15 10:36:11.748690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.114 [2024-11-15 10:36:11.748725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:11.114 [2024-11-15 10:36:11.764243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee0a68 00:18:11.114 [2024-11-15 10:36:11.765944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.114 [2024-11-15 10:36:11.765996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:11.114 
[2024-11-15 10:36:11.780856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee01f8 00:18:11.115 [2024-11-15 10:36:11.782243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.115 [2024-11-15 10:36:11.782279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:11.115 [2024-11-15 10:36:11.797396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016edf988 00:18:11.115 [2024-11-15 10:36:11.798983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.115 [2024-11-15 10:36:11.799019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:11.115 [2024-11-15 10:36:11.813635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016edf118 00:18:11.115 [2024-11-15 10:36:11.815068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.115 [2024-11-15 10:36:11.815117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:11.115 [2024-11-15 10:36:11.829912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ede8a8 00:18:11.115 [2024-11-15 10:36:11.831298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.115 [2024-11-15 10:36:11.831334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:11.115 [2024-11-15 10:36:11.846229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ede038 00:18:11.115 [2024-11-15 10:36:11.847548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.115 [2024-11-15 10:36:11.847587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:11.115 [2024-11-15 10:36:11.869557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ede038 00:18:11.115 [2024-11-15 10:36:11.872199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.115 [2024-11-15 10:36:11.872239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.115 [2024-11-15 10:36:11.886147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ede8a8 00:18:11.115 [2024-11-15 10:36:11.888673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.115 [2024-11-15 10:36:11.888713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
00:18:11.115 [2024-11-15 10:36:11.902547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016edf118 00:18:11.115 [2024-11-15 10:36:11.905176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.115 [2024-11-15 10:36:11.905354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:11.115 [2024-11-15 10:36:11.918856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016edf988 00:18:11.115 [2024-11-15 10:36:11.921393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.115 [2024-11-15 10:36:11.921429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.115 [2024-11-15 10:36:11.934616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee01f8 00:18:11.115 [2024-11-15 10:36:11.937154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.115 [2024-11-15 10:36:11.937190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:11.115 [2024-11-15 10:36:11.950993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee0a68 00:18:11.115 [2024-11-15 10:36:11.953401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.115 [2024-11-15 10:36:11.953442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:11.374 [2024-11-15 10:36:11.966951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee12d8 00:18:11.374 [2024-11-15 10:36:11.969588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.374 [2024-11-15 10:36:11.969624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:11.374 [2024-11-15 10:36:11.983396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee1b48 00:18:11.374 [2024-11-15 10:36:11.985767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.374 [2024-11-15 10:36:11.985801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:11.374 [2024-11-15 10:36:12.000137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee23b8 00:18:11.374 [2024-11-15 10:36:12.003310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.374 [2024-11-15 10:36:12.003343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 
sqhd:0052 p:0 m:0 dnr:0 00:18:11.374 [2024-11-15 10:36:12.018299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee2c28 00:18:11.374 [2024-11-15 10:36:12.020717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.374 [2024-11-15 10:36:12.020755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:11.374 [2024-11-15 10:36:12.034884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee3498 00:18:11.374 [2024-11-15 10:36:12.037300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.374 [2024-11-15 10:36:12.037337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:11.374 [2024-11-15 10:36:12.051557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee3d08 00:18:11.374 [2024-11-15 10:36:12.053900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.374 [2024-11-15 10:36:12.053937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:11.374 [2024-11-15 10:36:12.067942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee4578 00:18:11.374 [2024-11-15 10:36:12.070242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.374 [2024-11-15 10:36:12.070277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:11.374 [2024-11-15 10:36:12.084498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee4de8 00:18:11.374 [2024-11-15 10:36:12.086826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.374 [2024-11-15 10:36:12.086861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:11.374 [2024-11-15 10:36:12.100935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee5658 00:18:11.374 [2024-11-15 10:36:12.103220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.374 [2024-11-15 10:36:12.103256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:11.374 [2024-11-15 10:36:12.117608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee5ec8 00:18:11.374 [2024-11-15 10:36:12.119922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.374 [2024-11-15 10:36:12.119981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:11.374 [2024-11-15 10:36:12.134429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee6738 00:18:11.374 [2024-11-15 10:36:12.136769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.374 [2024-11-15 10:36:12.136937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.374 [2024-11-15 10:36:12.151509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee6fa8 00:18:11.374 [2024-11-15 10:36:12.153733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.374 [2024-11-15 10:36:12.153775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:11.374 [2024-11-15 10:36:12.168417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee7818 00:18:11.374 [2024-11-15 10:36:12.170659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.374 [2024-11-15 10:36:12.170843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:11.374 [2024-11-15 10:36:12.185340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee8088 00:18:11.374 [2024-11-15 10:36:12.189314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.375 [2024-11-15 10:36:12.189523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.375 [2024-11-15 10:36:12.204355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee88f8 00:18:11.375 [2024-11-15 10:36:12.206903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.375 [2024-11-15 10:36:12.207118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:11.375 [2024-11-15 10:36:12.221666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee9168 00:18:11.375 [2024-11-15 10:36:12.224030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.375 [2024-11-15 10:36:12.224263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:11.634 [2024-11-15 10:36:12.238621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ee99d8 00:18:11.634 [2024-11-15 10:36:12.240925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.634 [2024-11-15 10:36:12.240965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:51 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:11.634 [2024-11-15 10:36:12.254750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016eea248 00:18:11.634 [2024-11-15 10:36:12.256912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.634 [2024-11-15 10:36:12.256949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:11.634 [2024-11-15 10:36:12.271626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016eeaab8 00:18:11.634 [2024-11-15 10:36:12.273809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.634 [2024-11-15 10:36:12.273845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:11.634 [2024-11-15 10:36:12.287990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016eeb328 00:18:11.634 [2024-11-15 10:36:12.290139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.634 [2024-11-15 10:36:12.290322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:11.634 [2024-11-15 10:36:12.304783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016eebb98 00:18:11.634 [2024-11-15 10:36:12.307093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.634 [2024-11-15 10:36:12.307303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:11.634 [2024-11-15 10:36:12.321398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016eec408 00:18:11.634 [2024-11-15 10:36:12.323636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.634 [2024-11-15 10:36:12.323873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:11.634 [2024-11-15 10:36:12.338346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016eecc78 00:18:11.635 [2024-11-15 10:36:12.340631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.635 [2024-11-15 10:36:12.340823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:11.635 [2024-11-15 10:36:12.355190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016eed4e8 00:18:11.635 [2024-11-15 10:36:12.357344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.635 [2024-11-15 10:36:12.357536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:11.635 [2024-11-15 10:36:12.371724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016eedd58 00:18:11.635 [2024-11-15 10:36:12.373936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.635 [2024-11-15 10:36:12.374151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:11.635 [2024-11-15 10:36:12.388470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016eee5c8 00:18:11.635 [2024-11-15 10:36:12.390521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.635 [2024-11-15 10:36:12.390722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:11.635 [2024-11-15 10:36:12.405263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016eeee38 00:18:11.635 [2024-11-15 10:36:12.407410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.635 [2024-11-15 10:36:12.407585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.635 [2024-11-15 10:36:12.421589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016eef6a8 00:18:11.635 [2024-11-15 10:36:12.423783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.635 [2024-11-15 10:36:12.423996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:11.635 [2024-11-15 10:36:12.438208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016eeff18 00:18:11.635 [2024-11-15 10:36:12.440346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.635 [2024-11-15 10:36:12.440383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:11.635 [2024-11-15 10:36:12.454447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef0788 00:18:11.635 [2024-11-15 10:36:12.456341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.635 [2024-11-15 10:36:12.456375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.635 [2024-11-15 10:36:12.470271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef0ff8 00:18:11.635 [2024-11-15 10:36:12.472152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.635 [2024-11-15 10:36:12.472188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:11.895 [2024-11-15 10:36:12.485858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef1868 00:18:11.895 [2024-11-15 10:36:12.487775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.895 [2024-11-15 10:36:12.487810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:11.895 [2024-11-15 10:36:12.501561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef20d8 00:18:11.895 [2024-11-15 10:36:12.503350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.895 [2024-11-15 10:36:12.503408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:11.895 [2024-11-15 10:36:12.517177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef2948 00:18:11.895 [2024-11-15 10:36:12.518900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.895 [2024-11-15 10:36:12.518950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:11.895 [2024-11-15 10:36:12.532674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef31b8 00:18:11.895 [2024-11-15 10:36:12.534576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.895 [2024-11-15 10:36:12.534608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:11.895 [2024-11-15 10:36:12.548463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f5b0) with pdu=0x200016ef3a28 00:18:11.895 [2024-11-15 10:36:12.550185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.895 [2024-11-15 10:36:12.550219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:11.895 15434.00 IOPS, 60.29 MiB/s 00:18:11.895 Latency(us) 00:18:11.895 [2024-11-15T10:36:12.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.895 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:11.895 nvme0n1 : 2.01 15411.21 60.20 0.00 0.00 8298.46 3872.58 30742.34 00:18:11.895 [2024-11-15T10:36:12.748Z] =================================================================================================================== 00:18:11.895 [2024-11-15T10:36:12.748Z] Total : 15411.21 60.20 0.00 0.00 8298.46 3872.58 30742.34 00:18:11.895 { 00:18:11.895 "results": [ 00:18:11.895 { 00:18:11.895 "job": "nvme0n1", 00:18:11.895 "core_mask": "0x2", 00:18:11.895 "workload": "randwrite", 00:18:11.895 "status": "finished", 00:18:11.895 "queue_depth": 128, 00:18:11.895 "io_size": 4096, 00:18:11.895 "runtime": 2.011263, 00:18:11.895 "iops": 
15411.211760968108, 00:18:11.895 "mibps": 60.20004594128167, 00:18:11.895 "io_failed": 0, 00:18:11.895 "io_timeout": 0, 00:18:11.895 "avg_latency_us": 8298.461209540232, 00:18:11.895 "min_latency_us": 3872.581818181818, 00:18:11.895 "max_latency_us": 30742.34181818182 00:18:11.895 } 00:18:11.895 ], 00:18:11.895 "core_count": 1 00:18:11.895 } 00:18:11.895 10:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:11.895 10:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:11.895 10:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:11.895 | .driver_specific 00:18:11.895 | .nvme_error 00:18:11.895 | .status_code 00:18:11.895 | .command_transient_transport_error' 00:18:11.895 10:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:12.161 10:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 121 > 0 )) 00:18:12.161 10:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80558 00:18:12.161 10:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80558 ']' 00:18:12.161 10:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80558 00:18:12.161 10:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:18:12.161 10:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:12.161 10:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80558 00:18:12.161 killing process with pid 80558 00:18:12.161 Received shutdown signal, test time was about 2.000000 seconds 00:18:12.161 00:18:12.161 Latency(us) 00:18:12.161 [2024-11-15T10:36:13.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.161 [2024-11-15T10:36:13.014Z] =================================================================================================================== 00:18:12.161 [2024-11-15T10:36:13.014Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:12.161 10:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:12.161 10:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:12.161 10:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80558' 00:18:12.161 10:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80558 00:18:12.161 10:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80558 00:18:12.421 10:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:18:12.421 10:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:12.421 10:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:12.421 10:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:12.421 10:36:13 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:12.421 10:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:12.421 10:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80619 00:18:12.421 10:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80619 /var/tmp/bperf.sock 00:18:12.421 10:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80619 ']' 00:18:12.421 10:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:12.421 10:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:12.421 10:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:12.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:12.421 10:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:12.421 10:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:12.421 [2024-11-15 10:36:13.118371] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:18:12.421 [2024-11-15 10:36:13.118641] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80619 ] 00:18:12.421 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:12.421 Zero copy mechanism will not be used. 
00:18:12.421 [2024-11-15 10:36:13.260497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.680 [2024-11-15 10:36:13.323002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.680 [2024-11-15 10:36:13.377709] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:12.680 10:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:12.680 10:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:18:12.680 10:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:12.680 10:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:12.942 10:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:12.943 10:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.943 10:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:12.943 10:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.943 10:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:12.943 10:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:13.511 nvme0n1 00:18:13.511 10:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:13.511 10:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.511 10:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:13.511 10:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.511 10:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:13.511 10:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:13.511 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:13.511 Zero copy mechanism will not be used. 00:18:13.511 Running I/O for 2 seconds... 
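Before perform_tests starts the timed run, the trace configures the bperf-side controller for digest-error accounting: per-controller NVMe error statistics are enabled with unlimited bdev retries, the controller is attached over TCP with data digest (--ddgst), and the target's crc32c accel operations are set to be corrupted 32 times so the data PDUs fail their CRC check. A condensed sketch of that RPC sequence, assembled from the rpc.py calls visible in the trace (addresses, NQN and injection count are taken from the log; in the trace the accel_error_inject_error calls go through rpc_cmd to the nvmf target's own RPC socket, shown here with rpc.py's default socket as an assumption):

    BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    # count NVMe error completions per controller and retry failed I/O indefinitely
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # attach the subsystem over TCP with data digest enabled
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # corrupt 32 crc32c operations on the target so subsequent data digests no longer match
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

    # kick off the 2-second randwrite run defined on the bdevperf command line
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

After the run, the count is read back the same way as for the previous run earlier in the log: bdev_get_iostat -b nvme0n1 piped through jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error', and the test asserts the returned value is greater than zero.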
00:18:13.511 [2024-11-15 10:36:14.203252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.511 [2024-11-15 10:36:14.203602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.511 [2024-11-15 10:36:14.203632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.511 [2024-11-15 10:36:14.209077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.511 [2024-11-15 10:36:14.209207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.511 [2024-11-15 10:36:14.209233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.511 [2024-11-15 10:36:14.214522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.511 [2024-11-15 10:36:14.214607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.511 [2024-11-15 10:36:14.214630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.511 [2024-11-15 10:36:14.219975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.511 [2024-11-15 10:36:14.220049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.511 [2024-11-15 10:36:14.220078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.511 [2024-11-15 10:36:14.225378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.511 [2024-11-15 10:36:14.225469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.511 [2024-11-15 10:36:14.225491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.512 [2024-11-15 10:36:14.230691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.512 [2024-11-15 10:36:14.230783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.512 [2024-11-15 10:36:14.230806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.512 [2024-11-15 10:36:14.235880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.512 [2024-11-15 10:36:14.235953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.512 [2024-11-15 10:36:14.235975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:18:13.512 [2024-11-15 10:36:14.241017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.512 [2024-11-15 10:36:14.241245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.512 [2024-11-15 10:36:14.241268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.512 [2024-11-15 10:36:14.246412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.512 [2024-11-15 10:36:14.246488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.512 [2024-11-15 10:36:14.246510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.512 [2024-11-15 10:36:14.251685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.512 [2024-11-15 10:36:14.251772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.512 [2024-11-15 10:36:14.251801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.512 [2024-11-15 10:36:14.257164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.512 [2024-11-15 10:36:14.257256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.512 [2024-11-15 10:36:14.257278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.512 [2024-11-15 10:36:14.262686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.512 [2024-11-15 10:36:14.262776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.512 [2024-11-15 10:36:14.262798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.512 [2024-11-15 10:36:14.268195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.512 [2024-11-15 10:36:14.268286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.512 [2024-11-15 10:36:14.268307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.512 [2024-11-15 10:36:14.273405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.512 [2024-11-15 10:36:14.273497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.512 [2024-11-15 10:36:14.273518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.512 [2024-11-15 10:36:14.278599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.512 [2024-11-15 10:36:14.278684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.512 [2024-11-15 10:36:14.278706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.512 [2024-11-15 10:36:14.283764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.512 [2024-11-15 10:36:14.283869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.512 [2024-11-15 10:36:14.283892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.512 [2024-11-15 10:36:14.288966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.512 [2024-11-15 10:36:14.289235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.512 [2024-11-15 10:36:14.289258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.512 [2024-11-15 10:36:14.294291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.512 [2024-11-15 10:36:14.294365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.512 [2024-11-15 10:36:14.294387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.512 [2024-11-15 10:36:14.299287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.512 [2024-11-15 10:36:14.299434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.512 [2024-11-15 10:36:14.299456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.512 [2024-11-15 10:36:14.304481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.512 [2024-11-15 10:36:14.304728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.512 [2024-11-15 10:36:14.304750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.512 [2024-11-15 10:36:14.310007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.512 [2024-11-15 10:36:14.310157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.512 [2024-11-15 10:36:14.310180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.512 [2024-11-15 10:36:14.315296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.512 [2024-11-15 10:36:14.315430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.512 [2024-11-15 10:36:14.315453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.512 [2024-11-15 10:36:14.320508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.512 [2024-11-15 10:36:14.320791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.512 [2024-11-15 10:36:14.320813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.512 [2024-11-15 10:36:14.325901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.512 [2024-11-15 10:36:14.325990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.512 [2024-11-15 10:36:14.326012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.512 [2024-11-15 10:36:14.330954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.512 [2024-11-15 10:36:14.331041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.512 [2024-11-15 10:36:14.331063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.512 [2024-11-15 10:36:14.336194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.512 [2024-11-15 10:36:14.336278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.512 [2024-11-15 10:36:14.336299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.512 [2024-11-15 10:36:14.341214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.512 [2024-11-15 10:36:14.341294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.512 [2024-11-15 10:36:14.341315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.512 [2024-11-15 10:36:14.346597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.512 [2024-11-15 10:36:14.346690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.512 [2024-11-15 10:36:14.346713] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.513 [2024-11-15 10:36:14.351966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.513 [2024-11-15 10:36:14.352258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.513 [2024-11-15 10:36:14.352287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.513 [2024-11-15 10:36:14.357558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.513 [2024-11-15 10:36:14.357664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.513 [2024-11-15 10:36:14.357686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.772 [2024-11-15 10:36:14.362872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.772 [2024-11-15 10:36:14.362962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.772 [2024-11-15 10:36:14.362984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.772 [2024-11-15 10:36:14.368041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.772 [2024-11-15 10:36:14.368309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.772 [2024-11-15 10:36:14.368331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.772 [2024-11-15 10:36:14.373679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.772 [2024-11-15 10:36:14.373792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.772 [2024-11-15 10:36:14.373814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.772 [2024-11-15 10:36:14.379056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.772 [2024-11-15 10:36:14.379162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.772 [2024-11-15 10:36:14.379185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.772 [2024-11-15 10:36:14.384361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.772 [2024-11-15 10:36:14.384467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.772 [2024-11-15 10:36:14.384488] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.772 [2024-11-15 10:36:14.389564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.773 [2024-11-15 10:36:14.389670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.773 [2024-11-15 10:36:14.389692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.773 [2024-11-15 10:36:14.394848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.773 [2024-11-15 10:36:14.394939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.773 [2024-11-15 10:36:14.394960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.773 [2024-11-15 10:36:14.400021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.773 [2024-11-15 10:36:14.400289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.773 [2024-11-15 10:36:14.400311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.773 [2024-11-15 10:36:14.405339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.773 [2024-11-15 10:36:14.405426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.773 [2024-11-15 10:36:14.405446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.773 [2024-11-15 10:36:14.410424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.773 [2024-11-15 10:36:14.410523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.773 [2024-11-15 10:36:14.410545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.773 [2024-11-15 10:36:14.415580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.773 [2024-11-15 10:36:14.415861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.773 [2024-11-15 10:36:14.415883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.773 [2024-11-15 10:36:14.420992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.773 [2024-11-15 10:36:14.421089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.773 [2024-11-15 
10:36:14.421110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.773 [2024-11-15 10:36:14.426116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.773 [2024-11-15 10:36:14.426186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.773 [2024-11-15 10:36:14.426207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.773 [2024-11-15 10:36:14.431208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.773 [2024-11-15 10:36:14.431285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.773 [2024-11-15 10:36:14.431322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.773 [2024-11-15 10:36:14.436480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.773 [2024-11-15 10:36:14.436554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.773 [2024-11-15 10:36:14.436576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.773 [2024-11-15 10:36:14.441680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.773 [2024-11-15 10:36:14.441777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.773 [2024-11-15 10:36:14.441799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.773 [2024-11-15 10:36:14.446980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.773 [2024-11-15 10:36:14.447111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.773 [2024-11-15 10:36:14.447134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.773 [2024-11-15 10:36:14.452272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.773 [2024-11-15 10:36:14.452397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.773 [2024-11-15 10:36:14.452419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.773 [2024-11-15 10:36:14.457608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.773 [2024-11-15 10:36:14.457701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:13.773 [2024-11-15 10:36:14.457723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.773 [2024-11-15 10:36:14.463028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.773 [2024-11-15 10:36:14.463186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.773 [2024-11-15 10:36:14.463209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.773 [2024-11-15 10:36:14.468364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.773 [2024-11-15 10:36:14.468455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.773 [2024-11-15 10:36:14.468477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.773 [2024-11-15 10:36:14.473731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.773 [2024-11-15 10:36:14.473808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.773 [2024-11-15 10:36:14.473830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.773 [2024-11-15 10:36:14.479049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.773 [2024-11-15 10:36:14.479140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.773 [2024-11-15 10:36:14.479163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.773 [2024-11-15 10:36:14.484389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.773 [2024-11-15 10:36:14.484475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.773 [2024-11-15 10:36:14.484497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.773 [2024-11-15 10:36:14.489642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.773 [2024-11-15 10:36:14.489726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.773 [2024-11-15 10:36:14.489749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.773 [2024-11-15 10:36:14.494853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.773 [2024-11-15 10:36:14.495068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:13.773 [2024-11-15 10:36:14.495091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.773 [2024-11-15 10:36:14.500368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.774 [2024-11-15 10:36:14.500474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.774 [2024-11-15 10:36:14.500495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.774 [2024-11-15 10:36:14.505881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.774 [2024-11-15 10:36:14.505961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.774 [2024-11-15 10:36:14.505983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.774 [2024-11-15 10:36:14.511134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.774 [2024-11-15 10:36:14.511205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.774 [2024-11-15 10:36:14.511226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.774 [2024-11-15 10:36:14.516592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.774 [2024-11-15 10:36:14.516664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.774 [2024-11-15 10:36:14.516684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.774 [2024-11-15 10:36:14.521882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.774 [2024-11-15 10:36:14.521960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.774 [2024-11-15 10:36:14.521982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.774 [2024-11-15 10:36:14.527097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.774 [2024-11-15 10:36:14.527191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.774 [2024-11-15 10:36:14.527214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.774 [2024-11-15 10:36:14.532279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.774 [2024-11-15 10:36:14.532380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.774 [2024-11-15 10:36:14.532403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.774 [2024-11-15 10:36:14.537721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.774 [2024-11-15 10:36:14.537795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.774 [2024-11-15 10:36:14.537816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.774 [2024-11-15 10:36:14.542952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.774 [2024-11-15 10:36:14.543210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.774 [2024-11-15 10:36:14.543232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.774 [2024-11-15 10:36:14.548532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.774 [2024-11-15 10:36:14.548782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.774 [2024-11-15 10:36:14.549008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.774 [2024-11-15 10:36:14.553818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.774 [2024-11-15 10:36:14.554067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.774 [2024-11-15 10:36:14.554313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.774 [2024-11-15 10:36:14.559094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.774 [2024-11-15 10:36:14.559342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.774 [2024-11-15 10:36:14.559529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.774 [2024-11-15 10:36:14.564654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.774 [2024-11-15 10:36:14.564886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.774 [2024-11-15 10:36:14.565134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.774 [2024-11-15 10:36:14.570041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.774 [2024-11-15 10:36:14.570299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.774 [2024-11-15 10:36:14.570458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.774 [2024-11-15 10:36:14.575397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.774 [2024-11-15 10:36:14.575640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.774 [2024-11-15 10:36:14.575835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.774 [2024-11-15 10:36:14.580723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.774 [2024-11-15 10:36:14.580994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.774 [2024-11-15 10:36:14.581181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.774 [2024-11-15 10:36:14.586025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.774 [2024-11-15 10:36:14.586290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.774 [2024-11-15 10:36:14.586461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.774 [2024-11-15 10:36:14.591395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.774 [2024-11-15 10:36:14.591626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.774 [2024-11-15 10:36:14.591650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.774 [2024-11-15 10:36:14.596757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.774 [2024-11-15 10:36:14.596854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.774 [2024-11-15 10:36:14.596875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.774 [2024-11-15 10:36:14.601807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.774 [2024-11-15 10:36:14.601880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.774 [2024-11-15 10:36:14.601901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.775 [2024-11-15 10:36:14.606968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.775 [2024-11-15 10:36:14.607186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:4 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.775 [2024-11-15 10:36:14.607208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.775 [2024-11-15 10:36:14.612248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.775 [2024-11-15 10:36:14.612508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.775 [2024-11-15 10:36:14.612752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.775 [2024-11-15 10:36:14.617615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:13.775 [2024-11-15 10:36:14.617921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.775 [2024-11-15 10:36:14.618180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.035 [2024-11-15 10:36:14.624287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.035 [2024-11-15 10:36:14.624566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.035 [2024-11-15 10:36:14.624733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.035 [2024-11-15 10:36:14.630165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.035 [2024-11-15 10:36:14.630427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.035 [2024-11-15 10:36:14.630652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.035 [2024-11-15 10:36:14.635497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.035 [2024-11-15 10:36:14.635752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.035 [2024-11-15 10:36:14.635910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.035 [2024-11-15 10:36:14.640835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.035 [2024-11-15 10:36:14.641016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.035 [2024-11-15 10:36:14.641192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.035 [2024-11-15 10:36:14.646117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.035 [2024-11-15 10:36:14.646353] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.035 [2024-11-15 10:36:14.646584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.035 [2024-11-15 10:36:14.651421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.035 [2024-11-15 10:36:14.651645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.035 [2024-11-15 10:36:14.651813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.035 [2024-11-15 10:36:14.656670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.035 [2024-11-15 10:36:14.656931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.035 [2024-11-15 10:36:14.657112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.035 [2024-11-15 10:36:14.662027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.035 [2024-11-15 10:36:14.662321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.035 [2024-11-15 10:36:14.662557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.035 [2024-11-15 10:36:14.667292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.035 [2024-11-15 10:36:14.667557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.035 [2024-11-15 10:36:14.667709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.035 [2024-11-15 10:36:14.672607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.035 [2024-11-15 10:36:14.672826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.035 [2024-11-15 10:36:14.672849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.035 [2024-11-15 10:36:14.678016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.035 [2024-11-15 10:36:14.678299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.035 [2024-11-15 10:36:14.678511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.035 [2024-11-15 10:36:14.683438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.035 [2024-11-15 
10:36:14.683664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.035 [2024-11-15 10:36:14.683888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.035 [2024-11-15 10:36:14.688805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.035 [2024-11-15 10:36:14.689063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.035 [2024-11-15 10:36:14.689275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.036 [2024-11-15 10:36:14.694162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.036 [2024-11-15 10:36:14.694431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.036 [2024-11-15 10:36:14.694637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.036 [2024-11-15 10:36:14.699447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.036 [2024-11-15 10:36:14.699526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.036 [2024-11-15 10:36:14.699549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.036 [2024-11-15 10:36:14.704477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.036 [2024-11-15 10:36:14.704567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.036 [2024-11-15 10:36:14.704588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.036 [2024-11-15 10:36:14.709540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.036 [2024-11-15 10:36:14.709625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.036 [2024-11-15 10:36:14.709663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.036 [2024-11-15 10:36:14.714659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.036 [2024-11-15 10:36:14.714919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.036 [2024-11-15 10:36:14.714941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.036 [2024-11-15 10:36:14.720091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 
00:18:14.036 [2024-11-15 10:36:14.720202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.036 [2024-11-15 10:36:14.720223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.036 [2024-11-15 10:36:14.725126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.036 [2024-11-15 10:36:14.725239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.036 [2024-11-15 10:36:14.725276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.036 [2024-11-15 10:36:14.730111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.036 [2024-11-15 10:36:14.730203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.036 [2024-11-15 10:36:14.730224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.036 [2024-11-15 10:36:14.735261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.036 [2024-11-15 10:36:14.735331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.036 [2024-11-15 10:36:14.735379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.036 [2024-11-15 10:36:14.740372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.036 [2024-11-15 10:36:14.740468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.036 [2024-11-15 10:36:14.740490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.036 [2024-11-15 10:36:14.745600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.036 [2024-11-15 10:36:14.745837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.036 [2024-11-15 10:36:14.745859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.036 [2024-11-15 10:36:14.750944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.036 [2024-11-15 10:36:14.751021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.036 [2024-11-15 10:36:14.751044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.036 [2024-11-15 10:36:14.756385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with 
pdu=0x200016eff3c8 00:18:14.036 [2024-11-15 10:36:14.756459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.036 [2024-11-15 10:36:14.756481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.036 [2024-11-15 10:36:14.761562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.036 [2024-11-15 10:36:14.761782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.036 [2024-11-15 10:36:14.761804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.036 [2024-11-15 10:36:14.766908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.036 [2024-11-15 10:36:14.766983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.036 [2024-11-15 10:36:14.767005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.036 [2024-11-15 10:36:14.772203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.036 [2024-11-15 10:36:14.772293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.036 [2024-11-15 10:36:14.772314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.036 [2024-11-15 10:36:14.777448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.036 [2024-11-15 10:36:14.777542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.036 [2024-11-15 10:36:14.777563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.036 [2024-11-15 10:36:14.782774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.036 [2024-11-15 10:36:14.782852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.036 [2024-11-15 10:36:14.782874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.036 [2024-11-15 10:36:14.788211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.036 [2024-11-15 10:36:14.788300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.036 [2024-11-15 10:36:14.788322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.036 [2024-11-15 10:36:14.793419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.036 [2024-11-15 10:36:14.793679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.036 [2024-11-15 10:36:14.793701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.036 [2024-11-15 10:36:14.798733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.036 [2024-11-15 10:36:14.798820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.036 [2024-11-15 10:36:14.798844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.036 [2024-11-15 10:36:14.803880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.036 [2024-11-15 10:36:14.803968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.036 [2024-11-15 10:36:14.803990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.036 [2024-11-15 10:36:14.809093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.036 [2024-11-15 10:36:14.809177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.037 [2024-11-15 10:36:14.809198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.037 [2024-11-15 10:36:14.814314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.037 [2024-11-15 10:36:14.814404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.037 [2024-11-15 10:36:14.814440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.037 [2024-11-15 10:36:14.819650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.037 [2024-11-15 10:36:14.819727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.037 [2024-11-15 10:36:14.819749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.037 [2024-11-15 10:36:14.824842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.037 [2024-11-15 10:36:14.825137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.037 [2024-11-15 10:36:14.825175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.037 [2024-11-15 10:36:14.830276] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.037 [2024-11-15 10:36:14.830361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.037 [2024-11-15 10:36:14.830382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.037 [2024-11-15 10:36:14.835534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.037 [2024-11-15 10:36:14.835628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.037 [2024-11-15 10:36:14.835655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.037 [2024-11-15 10:36:14.841016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.037 [2024-11-15 10:36:14.841258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.037 [2024-11-15 10:36:14.841280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.037 [2024-11-15 10:36:14.846523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.037 [2024-11-15 10:36:14.846602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.037 [2024-11-15 10:36:14.846624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.037 [2024-11-15 10:36:14.851805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.037 [2024-11-15 10:36:14.851942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.037 [2024-11-15 10:36:14.851964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.037 [2024-11-15 10:36:14.857193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.037 [2024-11-15 10:36:14.857344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.037 [2024-11-15 10:36:14.857366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.037 [2024-11-15 10:36:14.862526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.037 [2024-11-15 10:36:14.862631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.037 [2024-11-15 10:36:14.862653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.037 [2024-11-15 10:36:14.867873] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.037 [2024-11-15 10:36:14.867958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.037 [2024-11-15 10:36:14.867980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.037 [2024-11-15 10:36:14.873269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.037 [2024-11-15 10:36:14.873364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.037 [2024-11-15 10:36:14.873386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.037 [2024-11-15 10:36:14.878483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.037 [2024-11-15 10:36:14.878568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.037 [2024-11-15 10:36:14.878607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.037 [2024-11-15 10:36:14.883777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.037 [2024-11-15 10:36:14.883876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.037 [2024-11-15 10:36:14.883898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.297 [2024-11-15 10:36:14.889179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.297 [2024-11-15 10:36:14.889257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.297 [2024-11-15 10:36:14.889280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.297 [2024-11-15 10:36:14.894216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.297 [2024-11-15 10:36:14.894322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.297 [2024-11-15 10:36:14.894343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.297 [2024-11-15 10:36:14.899748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.298 [2024-11-15 10:36:14.899825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.298 [2024-11-15 10:36:14.899847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.298 
[2024-11-15 10:36:14.905177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.298 [2024-11-15 10:36:14.905266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.298 [2024-11-15 10:36:14.905287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.298 [2024-11-15 10:36:14.910435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.298 [2024-11-15 10:36:14.910524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.298 [2024-11-15 10:36:14.910545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.298 [2024-11-15 10:36:14.915555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.298 [2024-11-15 10:36:14.915632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.298 [2024-11-15 10:36:14.915655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.298 [2024-11-15 10:36:14.920748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.298 [2024-11-15 10:36:14.921023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.298 [2024-11-15 10:36:14.921045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.298 [2024-11-15 10:36:14.926287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.298 [2024-11-15 10:36:14.926393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.298 [2024-11-15 10:36:14.926415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.298 [2024-11-15 10:36:14.931481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.298 [2024-11-15 10:36:14.931556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.298 [2024-11-15 10:36:14.931578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.298 [2024-11-15 10:36:14.936677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.298 [2024-11-15 10:36:14.936938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.298 [2024-11-15 10:36:14.936960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:18:14.298 [2024-11-15 10:36:14.942145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.298 [2024-11-15 10:36:14.942234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.298 [2024-11-15 10:36:14.942255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.298 [2024-11-15 10:36:14.947213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.298 [2024-11-15 10:36:14.947322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.298 [2024-11-15 10:36:14.947344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.298 [2024-11-15 10:36:14.952385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.298 [2024-11-15 10:36:14.952460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.298 [2024-11-15 10:36:14.952481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.298 [2024-11-15 10:36:14.957472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.298 [2024-11-15 10:36:14.957557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.298 [2024-11-15 10:36:14.957578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.298 [2024-11-15 10:36:14.962741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.298 [2024-11-15 10:36:14.962833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.298 [2024-11-15 10:36:14.962855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.298 [2024-11-15 10:36:14.968099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.298 [2024-11-15 10:36:14.968204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.298 [2024-11-15 10:36:14.968226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.298 [2024-11-15 10:36:14.973351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.298 [2024-11-15 10:36:14.973456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.298 [2024-11-15 10:36:14.973476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.298 [2024-11-15 10:36:14.978574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.298 [2024-11-15 10:36:14.978681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.298 [2024-11-15 10:36:14.978702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.298 [2024-11-15 10:36:14.983661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.298 [2024-11-15 10:36:14.983785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.298 [2024-11-15 10:36:14.983806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.298 [2024-11-15 10:36:14.988851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.298 [2024-11-15 10:36:14.989162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.298 [2024-11-15 10:36:14.989185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.298 [2024-11-15 10:36:14.994189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.298 [2024-11-15 10:36:14.994291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.298 [2024-11-15 10:36:14.994328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.298 [2024-11-15 10:36:14.999832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.298 [2024-11-15 10:36:14.999925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.298 [2024-11-15 10:36:14.999948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.298 [2024-11-15 10:36:15.005088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.298 [2024-11-15 10:36:15.005781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.298 [2024-11-15 10:36:15.005804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.298 [2024-11-15 10:36:15.010930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.298 [2024-11-15 10:36:15.011020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.298 [2024-11-15 10:36:15.011058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.298 [2024-11-15 10:36:15.016279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.298 [2024-11-15 10:36:15.016355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.299 [2024-11-15 10:36:15.016376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.299 [2024-11-15 10:36:15.021583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.299 [2024-11-15 10:36:15.021667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.299 [2024-11-15 10:36:15.021688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.299 [2024-11-15 10:36:15.026719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.299 [2024-11-15 10:36:15.026804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.299 [2024-11-15 10:36:15.026825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.299 [2024-11-15 10:36:15.031768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.299 [2024-11-15 10:36:15.031882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.299 [2024-11-15 10:36:15.031904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.299 [2024-11-15 10:36:15.036984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.299 [2024-11-15 10:36:15.037288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.299 [2024-11-15 10:36:15.037310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.299 [2024-11-15 10:36:15.042410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.299 [2024-11-15 10:36:15.042500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.299 [2024-11-15 10:36:15.042521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.299 [2024-11-15 10:36:15.047617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.299 [2024-11-15 10:36:15.047711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.299 [2024-11-15 10:36:15.047737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.299 [2024-11-15 10:36:15.052789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.299 [2024-11-15 10:36:15.053039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.299 [2024-11-15 10:36:15.053061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.299 [2024-11-15 10:36:15.058186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.299 [2024-11-15 10:36:15.058301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.299 [2024-11-15 10:36:15.058322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.299 [2024-11-15 10:36:15.063596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.299 [2024-11-15 10:36:15.063707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.299 [2024-11-15 10:36:15.063728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.299 [2024-11-15 10:36:15.069007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.299 [2024-11-15 10:36:15.069146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.299 [2024-11-15 10:36:15.069168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.299 [2024-11-15 10:36:15.074320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.299 [2024-11-15 10:36:15.074403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.299 [2024-11-15 10:36:15.074425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.299 [2024-11-15 10:36:15.079777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.299 [2024-11-15 10:36:15.079859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.299 [2024-11-15 10:36:15.079881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.299 [2024-11-15 10:36:15.085169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.299 [2024-11-15 10:36:15.085249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.299 [2024-11-15 10:36:15.085271] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.299 [2024-11-15 10:36:15.090703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.299 [2024-11-15 10:36:15.090787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.299 [2024-11-15 10:36:15.090808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.299 [2024-11-15 10:36:15.096108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.299 [2024-11-15 10:36:15.096201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.299 [2024-11-15 10:36:15.096222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.299 [2024-11-15 10:36:15.101321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.299 [2024-11-15 10:36:15.101401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.299 [2024-11-15 10:36:15.101422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.299 [2024-11-15 10:36:15.106506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.299 [2024-11-15 10:36:15.106592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.299 [2024-11-15 10:36:15.106613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.299 [2024-11-15 10:36:15.111768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.299 [2024-11-15 10:36:15.111850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.299 [2024-11-15 10:36:15.111885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.299 [2024-11-15 10:36:15.117182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.299 [2024-11-15 10:36:15.117266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.299 [2024-11-15 10:36:15.117288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.299 [2024-11-15 10:36:15.122478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.299 [2024-11-15 10:36:15.122561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.299 [2024-11-15 10:36:15.122583] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.299 [2024-11-15 10:36:15.127797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.299 [2024-11-15 10:36:15.127893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.299 [2024-11-15 10:36:15.127930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.299 [2024-11-15 10:36:15.134330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.299 [2024-11-15 10:36:15.134452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.299 [2024-11-15 10:36:15.134473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.299 [2024-11-15 10:36:15.140504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.299 [2024-11-15 10:36:15.140587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.300 [2024-11-15 10:36:15.140608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.300 [2024-11-15 10:36:15.145751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.300 [2024-11-15 10:36:15.145841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.300 [2024-11-15 10:36:15.145861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.560 [2024-11-15 10:36:15.150887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.560 [2024-11-15 10:36:15.150984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.560 [2024-11-15 10:36:15.151020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.560 [2024-11-15 10:36:15.156101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.560 [2024-11-15 10:36:15.156183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.560 [2024-11-15 10:36:15.156204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.560 [2024-11-15 10:36:15.161216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.560 [2024-11-15 10:36:15.161302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.560 [2024-11-15 
10:36:15.161323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.560 [2024-11-15 10:36:15.166321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.560 [2024-11-15 10:36:15.166414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.560 [2024-11-15 10:36:15.166435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.560 [2024-11-15 10:36:15.171579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.560 [2024-11-15 10:36:15.171663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.560 [2024-11-15 10:36:15.171685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.560 [2024-11-15 10:36:15.176873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.560 [2024-11-15 10:36:15.176974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.560 [2024-11-15 10:36:15.176996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.560 [2024-11-15 10:36:15.182394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.560 [2024-11-15 10:36:15.182490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.560 [2024-11-15 10:36:15.182511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.560 [2024-11-15 10:36:15.187796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.560 [2024-11-15 10:36:15.187864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.560 [2024-11-15 10:36:15.187886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.560 [2024-11-15 10:36:15.192941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.560 [2024-11-15 10:36:15.193011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.560 [2024-11-15 10:36:15.193033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.560 5770.00 IOPS, 721.25 MiB/s [2024-11-15T10:36:15.413Z] [2024-11-15 10:36:15.199136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.560 [2024-11-15 10:36:15.199215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.560 [2024-11-15 10:36:15.199238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.560 [2024-11-15 10:36:15.204401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.560 [2024-11-15 10:36:15.204495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.560 [2024-11-15 10:36:15.204516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.560 [2024-11-15 10:36:15.209717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.560 [2024-11-15 10:36:15.209802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.560 [2024-11-15 10:36:15.209823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.560 [2024-11-15 10:36:15.215199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.560 [2024-11-15 10:36:15.215278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.560 [2024-11-15 10:36:15.215300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.560 [2024-11-15 10:36:15.220589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.560 [2024-11-15 10:36:15.220686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.560 [2024-11-15 10:36:15.220708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.560 [2024-11-15 10:36:15.225970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.560 [2024-11-15 10:36:15.226052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.561 [2024-11-15 10:36:15.226078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.561 [2024-11-15 10:36:15.231125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.561 [2024-11-15 10:36:15.231205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.561 [2024-11-15 10:36:15.231227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.561 [2024-11-15 10:36:15.236534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.561 [2024-11-15 10:36:15.236618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:3 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.561 [2024-11-15 10:36:15.236639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.561 [2024-11-15 10:36:15.241808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.561 [2024-11-15 10:36:15.241909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.561 [2024-11-15 10:36:15.241931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.561 [2024-11-15 10:36:15.247284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.561 [2024-11-15 10:36:15.247437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.561 [2024-11-15 10:36:15.247460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.561 [2024-11-15 10:36:15.252638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.561 [2024-11-15 10:36:15.252731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.561 [2024-11-15 10:36:15.252752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.561 [2024-11-15 10:36:15.257942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.561 [2024-11-15 10:36:15.258036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.561 [2024-11-15 10:36:15.258058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.561 [2024-11-15 10:36:15.263009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.561 [2024-11-15 10:36:15.263163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.561 [2024-11-15 10:36:15.263185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.561 [2024-11-15 10:36:15.268137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.561 [2024-11-15 10:36:15.268234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.561 [2024-11-15 10:36:15.268255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.561 [2024-11-15 10:36:15.273549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.561 [2024-11-15 10:36:15.273660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.561 [2024-11-15 10:36:15.273682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.561 [2024-11-15 10:36:15.278908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.561 [2024-11-15 10:36:15.278978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.561 [2024-11-15 10:36:15.279000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.561 [2024-11-15 10:36:15.284151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.561 [2024-11-15 10:36:15.284251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.561 [2024-11-15 10:36:15.284273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.561 [2024-11-15 10:36:15.289457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.561 [2024-11-15 10:36:15.289537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.561 [2024-11-15 10:36:15.289559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.561 [2024-11-15 10:36:15.294761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.561 [2024-11-15 10:36:15.294831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.561 [2024-11-15 10:36:15.294853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.561 [2024-11-15 10:36:15.299974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.561 [2024-11-15 10:36:15.300076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.561 [2024-11-15 10:36:15.300098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.561 [2024-11-15 10:36:15.305333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.561 [2024-11-15 10:36:15.305443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.561 [2024-11-15 10:36:15.305464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.561 [2024-11-15 10:36:15.310665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.561 [2024-11-15 
10:36:15.310786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.561 [2024-11-15 10:36:15.310808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.561 [2024-11-15 10:36:15.316001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.561 [2024-11-15 10:36:15.316093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.561 [2024-11-15 10:36:15.316128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.561 [2024-11-15 10:36:15.321197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.561 [2024-11-15 10:36:15.321297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.561 [2024-11-15 10:36:15.321320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.561 [2024-11-15 10:36:15.326370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.561 [2024-11-15 10:36:15.326449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.561 [2024-11-15 10:36:15.326471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.561 [2024-11-15 10:36:15.331530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.561 [2024-11-15 10:36:15.331612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.561 [2024-11-15 10:36:15.331634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.561 [2024-11-15 10:36:15.336766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.561 [2024-11-15 10:36:15.336858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.561 [2024-11-15 10:36:15.336880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.561 [2024-11-15 10:36:15.342096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.561 [2024-11-15 10:36:15.342179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.561 [2024-11-15 10:36:15.342202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.561 [2024-11-15 10:36:15.347336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 
00:18:14.561 [2024-11-15 10:36:15.347468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.561 [2024-11-15 10:36:15.347490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.561 [2024-11-15 10:36:15.352681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.562 [2024-11-15 10:36:15.352769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.562 [2024-11-15 10:36:15.352792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.562 [2024-11-15 10:36:15.357916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.562 [2024-11-15 10:36:15.357986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.562 [2024-11-15 10:36:15.358009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.562 [2024-11-15 10:36:15.363334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.562 [2024-11-15 10:36:15.363441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.562 [2024-11-15 10:36:15.363462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.562 [2024-11-15 10:36:15.368622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.562 [2024-11-15 10:36:15.368724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.562 [2024-11-15 10:36:15.368745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.562 [2024-11-15 10:36:15.373912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.562 [2024-11-15 10:36:15.374009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.562 [2024-11-15 10:36:15.374032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.562 [2024-11-15 10:36:15.379213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.562 [2024-11-15 10:36:15.379338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.562 [2024-11-15 10:36:15.379388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.562 [2024-11-15 10:36:15.384420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with 
pdu=0x200016eff3c8 00:18:14.562 [2024-11-15 10:36:15.384524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.562 [2024-11-15 10:36:15.384545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.562 [2024-11-15 10:36:15.389701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.562 [2024-11-15 10:36:15.389791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.562 [2024-11-15 10:36:15.389812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.562 [2024-11-15 10:36:15.394940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.562 [2024-11-15 10:36:15.395051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.562 [2024-11-15 10:36:15.395073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.562 [2024-11-15 10:36:15.400243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.562 [2024-11-15 10:36:15.400344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.562 [2024-11-15 10:36:15.400365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.562 [2024-11-15 10:36:15.405354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.562 [2024-11-15 10:36:15.405441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.562 [2024-11-15 10:36:15.405462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.822 [2024-11-15 10:36:15.410711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.822 [2024-11-15 10:36:15.410808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.822 [2024-11-15 10:36:15.410830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.822 [2024-11-15 10:36:15.415964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.822 [2024-11-15 10:36:15.416059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.822 [2024-11-15 10:36:15.416083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.822 [2024-11-15 10:36:15.421183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.822 [2024-11-15 10:36:15.421294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.822 [2024-11-15 10:36:15.421316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.822 [2024-11-15 10:36:15.426355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.822 [2024-11-15 10:36:15.426424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.822 [2024-11-15 10:36:15.426445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.822 [2024-11-15 10:36:15.431593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.822 [2024-11-15 10:36:15.431680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.822 [2024-11-15 10:36:15.431703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.822 [2024-11-15 10:36:15.436735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.822 [2024-11-15 10:36:15.436830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.822 [2024-11-15 10:36:15.436851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.822 [2024-11-15 10:36:15.441935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.822 [2024-11-15 10:36:15.442018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.823 [2024-11-15 10:36:15.442041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.823 [2024-11-15 10:36:15.447087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.823 [2024-11-15 10:36:15.447173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.823 [2024-11-15 10:36:15.447210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.823 [2024-11-15 10:36:15.452331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.823 [2024-11-15 10:36:15.452424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.823 [2024-11-15 10:36:15.452445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.823 [2024-11-15 10:36:15.457493] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.823 [2024-11-15 10:36:15.457597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.823 [2024-11-15 10:36:15.457618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.823 [2024-11-15 10:36:15.462681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.823 [2024-11-15 10:36:15.462786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.823 [2024-11-15 10:36:15.462806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.823 [2024-11-15 10:36:15.467800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.823 [2024-11-15 10:36:15.467910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.823 [2024-11-15 10:36:15.467931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.823 [2024-11-15 10:36:15.472942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.823 [2024-11-15 10:36:15.473036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.823 [2024-11-15 10:36:15.473057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.823 [2024-11-15 10:36:15.478028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.823 [2024-11-15 10:36:15.478170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.823 [2024-11-15 10:36:15.478191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.823 [2024-11-15 10:36:15.483230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.823 [2024-11-15 10:36:15.483346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.823 [2024-11-15 10:36:15.483404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.823 [2024-11-15 10:36:15.488585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.823 [2024-11-15 10:36:15.488678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.823 [2024-11-15 10:36:15.488698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.823 [2024-11-15 10:36:15.493832] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.823 [2024-11-15 10:36:15.493912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.823 [2024-11-15 10:36:15.493933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.823 [2024-11-15 10:36:15.498921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.823 [2024-11-15 10:36:15.498987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.823 [2024-11-15 10:36:15.499009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.823 [2024-11-15 10:36:15.504096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.823 [2024-11-15 10:36:15.504183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.823 [2024-11-15 10:36:15.504204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.823 [2024-11-15 10:36:15.509323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.823 [2024-11-15 10:36:15.509418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.823 [2024-11-15 10:36:15.509440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.823 [2024-11-15 10:36:15.514527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.823 [2024-11-15 10:36:15.514626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.823 [2024-11-15 10:36:15.514652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.823 [2024-11-15 10:36:15.519614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.823 [2024-11-15 10:36:15.519695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.823 [2024-11-15 10:36:15.519717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.823 [2024-11-15 10:36:15.525000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.823 [2024-11-15 10:36:15.525122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.823 [2024-11-15 10:36:15.525145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.823 
[2024-11-15 10:36:15.530275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.823 [2024-11-15 10:36:15.530372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.823 [2024-11-15 10:36:15.530394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.823 [2024-11-15 10:36:15.535527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.824 [2024-11-15 10:36:15.535607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.824 [2024-11-15 10:36:15.535629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.824 [2024-11-15 10:36:15.540597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.824 [2024-11-15 10:36:15.540691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.824 [2024-11-15 10:36:15.540713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.824 [2024-11-15 10:36:15.545856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.824 [2024-11-15 10:36:15.545936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.824 [2024-11-15 10:36:15.545958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.824 [2024-11-15 10:36:15.551233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.824 [2024-11-15 10:36:15.551340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.824 [2024-11-15 10:36:15.551389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.824 [2024-11-15 10:36:15.556524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.824 [2024-11-15 10:36:15.556611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.824 [2024-11-15 10:36:15.556632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.824 [2024-11-15 10:36:15.561947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.824 [2024-11-15 10:36:15.562025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.824 [2024-11-15 10:36:15.562062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:18:14.824 [2024-11-15 10:36:15.567313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.824 [2024-11-15 10:36:15.567450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.824 [2024-11-15 10:36:15.567472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.824 [2024-11-15 10:36:15.572623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.824 [2024-11-15 10:36:15.572733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.824 [2024-11-15 10:36:15.572755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.824 [2024-11-15 10:36:15.578014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.824 [2024-11-15 10:36:15.578155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.824 [2024-11-15 10:36:15.578176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.824 [2024-11-15 10:36:15.583288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.824 [2024-11-15 10:36:15.583411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.824 [2024-11-15 10:36:15.583433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.824 [2024-11-15 10:36:15.588524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.824 [2024-11-15 10:36:15.588618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.824 [2024-11-15 10:36:15.588639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.824 [2024-11-15 10:36:15.593733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.824 [2024-11-15 10:36:15.593817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.824 [2024-11-15 10:36:15.593838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.824 [2024-11-15 10:36:15.598947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.824 [2024-11-15 10:36:15.599040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.824 [2024-11-15 10:36:15.599060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 
cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.824 [2024-11-15 10:36:15.604203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.824 [2024-11-15 10:36:15.604330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.824 [2024-11-15 10:36:15.604352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.824 [2024-11-15 10:36:15.609351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.824 [2024-11-15 10:36:15.609434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.824 [2024-11-15 10:36:15.609456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.824 [2024-11-15 10:36:15.614397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.824 [2024-11-15 10:36:15.614474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.824 [2024-11-15 10:36:15.614495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.824 [2024-11-15 10:36:15.619530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.824 [2024-11-15 10:36:15.619617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.824 [2024-11-15 10:36:15.619639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.824 [2024-11-15 10:36:15.624807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.824 [2024-11-15 10:36:15.624876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.824 [2024-11-15 10:36:15.624898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.825 [2024-11-15 10:36:15.630127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.825 [2024-11-15 10:36:15.630219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.825 [2024-11-15 10:36:15.630240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.825 [2024-11-15 10:36:15.635318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.825 [2024-11-15 10:36:15.635463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.825 [2024-11-15 10:36:15.635485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.825 [2024-11-15 10:36:15.640429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.825 [2024-11-15 10:36:15.640523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.825 [2024-11-15 10:36:15.640545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.825 [2024-11-15 10:36:15.645581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.825 [2024-11-15 10:36:15.645690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.825 [2024-11-15 10:36:15.645711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:14.825 [2024-11-15 10:36:15.650934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.825 [2024-11-15 10:36:15.651061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.825 [2024-11-15 10:36:15.651083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.825 [2024-11-15 10:36:15.656214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.825 [2024-11-15 10:36:15.656326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.825 [2024-11-15 10:36:15.656346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:14.825 [2024-11-15 10:36:15.662323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.825 [2024-11-15 10:36:15.662428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.825 [2024-11-15 10:36:15.662448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:14.825 [2024-11-15 10:36:15.668585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:14.825 [2024-11-15 10:36:15.668679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.825 [2024-11-15 10:36:15.668699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.086 [2024-11-15 10:36:15.673838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.086 [2024-11-15 10:36:15.673928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.087 [2024-11-15 10:36:15.673949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.087 [2024-11-15 10:36:15.679038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.087 [2024-11-15 10:36:15.679146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.087 [2024-11-15 10:36:15.679168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.087 [2024-11-15 10:36:15.684243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.087 [2024-11-15 10:36:15.684325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.087 [2024-11-15 10:36:15.684347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.087 [2024-11-15 10:36:15.689527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.087 [2024-11-15 10:36:15.689621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.087 [2024-11-15 10:36:15.689658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.087 [2024-11-15 10:36:15.694782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.087 [2024-11-15 10:36:15.694853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.087 [2024-11-15 10:36:15.694875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.087 [2024-11-15 10:36:15.699929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.087 [2024-11-15 10:36:15.700008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.087 [2024-11-15 10:36:15.700029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.087 [2024-11-15 10:36:15.705191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.087 [2024-11-15 10:36:15.705275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.087 [2024-11-15 10:36:15.705297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.087 [2024-11-15 10:36:15.710462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.087 [2024-11-15 10:36:15.710566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.087 [2024-11-15 10:36:15.710587] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.087 [2024-11-15 10:36:15.715859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.087 [2024-11-15 10:36:15.715937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.087 [2024-11-15 10:36:15.715958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.087 [2024-11-15 10:36:15.721137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.087 [2024-11-15 10:36:15.721219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.087 [2024-11-15 10:36:15.721241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.087 [2024-11-15 10:36:15.726392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.087 [2024-11-15 10:36:15.726459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.087 [2024-11-15 10:36:15.726480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.087 [2024-11-15 10:36:15.731736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.087 [2024-11-15 10:36:15.731814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.087 [2024-11-15 10:36:15.731836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.087 [2024-11-15 10:36:15.736925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.087 [2024-11-15 10:36:15.736994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.087 [2024-11-15 10:36:15.737017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.087 [2024-11-15 10:36:15.742110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.087 [2024-11-15 10:36:15.742175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.087 [2024-11-15 10:36:15.742195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.087 [2024-11-15 10:36:15.747154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.087 [2024-11-15 10:36:15.747219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.087 [2024-11-15 10:36:15.747240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.087 [2024-11-15 10:36:15.752254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.087 [2024-11-15 10:36:15.752319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.087 [2024-11-15 10:36:15.752340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.087 [2024-11-15 10:36:15.757352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.088 [2024-11-15 10:36:15.757430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.088 [2024-11-15 10:36:15.757451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.088 [2024-11-15 10:36:15.762507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.088 [2024-11-15 10:36:15.762584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.088 [2024-11-15 10:36:15.762604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.088 [2024-11-15 10:36:15.767713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.088 [2024-11-15 10:36:15.767792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.088 [2024-11-15 10:36:15.767830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.088 [2024-11-15 10:36:15.772963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.088 [2024-11-15 10:36:15.773044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.088 [2024-11-15 10:36:15.773080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.088 [2024-11-15 10:36:15.778146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.088 [2024-11-15 10:36:15.778228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.088 [2024-11-15 10:36:15.778250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.088 [2024-11-15 10:36:15.783287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.088 [2024-11-15 10:36:15.783378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.088 [2024-11-15 
10:36:15.783400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.088 [2024-11-15 10:36:15.788458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.088 [2024-11-15 10:36:15.788541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.088 [2024-11-15 10:36:15.788572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.088 [2024-11-15 10:36:15.793652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.088 [2024-11-15 10:36:15.793735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.088 [2024-11-15 10:36:15.793757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.088 [2024-11-15 10:36:15.798816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.088 [2024-11-15 10:36:15.798897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.088 [2024-11-15 10:36:15.798919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.088 [2024-11-15 10:36:15.804099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.088 [2024-11-15 10:36:15.804176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.088 [2024-11-15 10:36:15.804197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.088 [2024-11-15 10:36:15.809236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.088 [2024-11-15 10:36:15.809318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.088 [2024-11-15 10:36:15.809340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.088 [2024-11-15 10:36:15.814362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.088 [2024-11-15 10:36:15.814442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.088 [2024-11-15 10:36:15.814464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.088 [2024-11-15 10:36:15.819507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.088 [2024-11-15 10:36:15.819578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:15.088 [2024-11-15 10:36:15.819600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.088 [2024-11-15 10:36:15.824751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.088 [2024-11-15 10:36:15.824822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.088 [2024-11-15 10:36:15.824843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.088 [2024-11-15 10:36:15.829976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.088 [2024-11-15 10:36:15.830056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.088 [2024-11-15 10:36:15.830090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.088 [2024-11-15 10:36:15.835183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.088 [2024-11-15 10:36:15.835265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.088 [2024-11-15 10:36:15.835287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.088 [2024-11-15 10:36:15.840355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.088 [2024-11-15 10:36:15.840436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.088 [2024-11-15 10:36:15.840458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.088 [2024-11-15 10:36:15.845540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.088 [2024-11-15 10:36:15.845618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.088 [2024-11-15 10:36:15.845639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.088 [2024-11-15 10:36:15.850800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.088 [2024-11-15 10:36:15.850880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.088 [2024-11-15 10:36:15.850902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.088 [2024-11-15 10:36:15.856119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.089 [2024-11-15 10:36:15.856201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:15.089 [2024-11-15 10:36:15.856224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.089 [2024-11-15 10:36:15.861426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.089 [2024-11-15 10:36:15.861505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.089 [2024-11-15 10:36:15.861526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.089 [2024-11-15 10:36:15.866715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.089 [2024-11-15 10:36:15.866792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.089 [2024-11-15 10:36:15.866813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.089 [2024-11-15 10:36:15.871809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.089 [2024-11-15 10:36:15.871889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.089 [2024-11-15 10:36:15.871910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.089 [2024-11-15 10:36:15.876863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.089 [2024-11-15 10:36:15.876935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.089 [2024-11-15 10:36:15.876955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.089 [2024-11-15 10:36:15.882028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.089 [2024-11-15 10:36:15.882122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.089 [2024-11-15 10:36:15.882143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.089 [2024-11-15 10:36:15.887067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.089 [2024-11-15 10:36:15.887143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.089 [2024-11-15 10:36:15.887163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.089 [2024-11-15 10:36:15.892348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.089 [2024-11-15 10:36:15.892428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21632 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.089 [2024-11-15 10:36:15.892451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.089 [2024-11-15 10:36:15.897638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.089 [2024-11-15 10:36:15.897760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.089 [2024-11-15 10:36:15.897781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.089 [2024-11-15 10:36:15.903040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.089 [2024-11-15 10:36:15.903129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.089 [2024-11-15 10:36:15.903152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.089 [2024-11-15 10:36:15.908352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.089 [2024-11-15 10:36:15.908445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.089 [2024-11-15 10:36:15.908468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.089 [2024-11-15 10:36:15.913584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.089 [2024-11-15 10:36:15.913678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.089 [2024-11-15 10:36:15.913700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.089 [2024-11-15 10:36:15.918890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.089 [2024-11-15 10:36:15.918971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.089 [2024-11-15 10:36:15.918994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.089 [2024-11-15 10:36:15.924229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.089 [2024-11-15 10:36:15.924312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.089 [2024-11-15 10:36:15.924334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.089 [2024-11-15 10:36:15.929560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.089 [2024-11-15 10:36:15.929670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:3 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.089 [2024-11-15 10:36:15.929691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.089 [2024-11-15 10:36:15.934846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.089 [2024-11-15 10:36:15.934931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.089 [2024-11-15 10:36:15.934953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.349 [2024-11-15 10:36:15.940026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.349 [2024-11-15 10:36:15.940142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.349 [2024-11-15 10:36:15.940164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.350 [2024-11-15 10:36:15.945309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.350 [2024-11-15 10:36:15.945406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.350 [2024-11-15 10:36:15.945428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.350 [2024-11-15 10:36:15.950692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.350 [2024-11-15 10:36:15.950787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.350 [2024-11-15 10:36:15.950809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.350 [2024-11-15 10:36:15.956041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.350 [2024-11-15 10:36:15.956177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.350 [2024-11-15 10:36:15.956198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.350 [2024-11-15 10:36:15.961240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.350 [2024-11-15 10:36:15.961320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.350 [2024-11-15 10:36:15.961342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.350 [2024-11-15 10:36:15.966268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.350 [2024-11-15 10:36:15.966349] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.350 [2024-11-15 10:36:15.966377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.350 [2024-11-15 10:36:15.971245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.350 [2024-11-15 10:36:15.971313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.350 [2024-11-15 10:36:15.971333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.350 [2024-11-15 10:36:15.976323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.350 [2024-11-15 10:36:15.976416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.350 [2024-11-15 10:36:15.976436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.350 [2024-11-15 10:36:15.981622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.350 [2024-11-15 10:36:15.981704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.350 [2024-11-15 10:36:15.981740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.350 [2024-11-15 10:36:15.986667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.350 [2024-11-15 10:36:15.986766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.350 [2024-11-15 10:36:15.986786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.350 [2024-11-15 10:36:15.991783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.350 [2024-11-15 10:36:15.991868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.350 [2024-11-15 10:36:15.991889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.350 [2024-11-15 10:36:15.996829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.350 [2024-11-15 10:36:15.996922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.350 [2024-11-15 10:36:15.996942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.350 [2024-11-15 10:36:16.002193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.350 [2024-11-15 10:36:16.002261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.350 [2024-11-15 10:36:16.002284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.350 [2024-11-15 10:36:16.007306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.350 [2024-11-15 10:36:16.007398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.350 [2024-11-15 10:36:16.007420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.350 [2024-11-15 10:36:16.012375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.350 [2024-11-15 10:36:16.012469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.350 [2024-11-15 10:36:16.012489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.350 [2024-11-15 10:36:16.017402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.350 [2024-11-15 10:36:16.017487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.350 [2024-11-15 10:36:16.017508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.350 [2024-11-15 10:36:16.022360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.350 [2024-11-15 10:36:16.022454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.350 [2024-11-15 10:36:16.022475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.350 [2024-11-15 10:36:16.027689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.350 [2024-11-15 10:36:16.027769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.350 [2024-11-15 10:36:16.027800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.350 [2024-11-15 10:36:16.032746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.350 [2024-11-15 10:36:16.032827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.350 [2024-11-15 10:36:16.032848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.350 [2024-11-15 10:36:16.038062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.350 [2024-11-15 
10:36:16.038170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.350 [2024-11-15 10:36:16.038190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.350 [2024-11-15 10:36:16.043290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.351 [2024-11-15 10:36:16.043380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.351 [2024-11-15 10:36:16.043403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.351 [2024-11-15 10:36:16.048478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.351 [2024-11-15 10:36:16.048562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.351 [2024-11-15 10:36:16.048583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.351 [2024-11-15 10:36:16.053563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.351 [2024-11-15 10:36:16.053657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.351 [2024-11-15 10:36:16.053677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.351 [2024-11-15 10:36:16.058532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.351 [2024-11-15 10:36:16.058616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.351 [2024-11-15 10:36:16.058637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.351 [2024-11-15 10:36:16.063526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.351 [2024-11-15 10:36:16.063608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.351 [2024-11-15 10:36:16.063630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.351 [2024-11-15 10:36:16.068599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.351 [2024-11-15 10:36:16.068684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.351 [2024-11-15 10:36:16.068705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.351 [2024-11-15 10:36:16.073658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 
00:18:15.351 [2024-11-15 10:36:16.073743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.351 [2024-11-15 10:36:16.073763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.351 [2024-11-15 10:36:16.078716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.351 [2024-11-15 10:36:16.078811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.351 [2024-11-15 10:36:16.078831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.351 [2024-11-15 10:36:16.083937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.351 [2024-11-15 10:36:16.084038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.351 [2024-11-15 10:36:16.084076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.351 [2024-11-15 10:36:16.089275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.351 [2024-11-15 10:36:16.089364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.351 [2024-11-15 10:36:16.089386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.351 [2024-11-15 10:36:16.094426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.351 [2024-11-15 10:36:16.094522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.351 [2024-11-15 10:36:16.094544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.351 [2024-11-15 10:36:16.099735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.351 [2024-11-15 10:36:16.099838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.351 [2024-11-15 10:36:16.099860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.351 [2024-11-15 10:36:16.105040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.351 [2024-11-15 10:36:16.105137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.351 [2024-11-15 10:36:16.105158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.351 [2024-11-15 10:36:16.110260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) 
with pdu=0x200016eff3c8 00:18:15.351 [2024-11-15 10:36:16.110365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.351 [2024-11-15 10:36:16.110388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.351 [2024-11-15 10:36:16.115515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.351 [2024-11-15 10:36:16.115596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.351 [2024-11-15 10:36:16.115619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.351 [2024-11-15 10:36:16.120875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.351 [2024-11-15 10:36:16.120974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.351 [2024-11-15 10:36:16.120995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.351 [2024-11-15 10:36:16.126243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.351 [2024-11-15 10:36:16.126325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.351 [2024-11-15 10:36:16.126346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.351 [2024-11-15 10:36:16.131537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.351 [2024-11-15 10:36:16.131618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.351 [2024-11-15 10:36:16.131641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.351 [2024-11-15 10:36:16.136858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.351 [2024-11-15 10:36:16.136958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.351 [2024-11-15 10:36:16.136979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.351 [2024-11-15 10:36:16.142145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.351 [2024-11-15 10:36:16.142240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.351 [2024-11-15 10:36:16.142261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.351 [2024-11-15 10:36:16.147481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.351 [2024-11-15 10:36:16.147553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.351 [2024-11-15 10:36:16.147575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.351 [2024-11-15 10:36:16.152781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.351 [2024-11-15 10:36:16.152866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.352 [2024-11-15 10:36:16.152887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.352 [2024-11-15 10:36:16.158062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.352 [2024-11-15 10:36:16.158172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.352 [2024-11-15 10:36:16.158194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.352 [2024-11-15 10:36:16.163120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.352 [2024-11-15 10:36:16.163202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.352 [2024-11-15 10:36:16.163223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.352 [2024-11-15 10:36:16.168207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.352 [2024-11-15 10:36:16.168289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.352 [2024-11-15 10:36:16.168312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.352 [2024-11-15 10:36:16.173227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.352 [2024-11-15 10:36:16.173299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.352 [2024-11-15 10:36:16.173320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.352 [2024-11-15 10:36:16.178254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.352 [2024-11-15 10:36:16.178355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.352 [2024-11-15 10:36:16.178376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.352 [2024-11-15 10:36:16.183292] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.352 [2024-11-15 10:36:16.183421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.352 [2024-11-15 10:36:16.183443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.352 [2024-11-15 10:36:16.189493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.352 [2024-11-15 10:36:16.189589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.352 [2024-11-15 10:36:16.189610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.352 5838.50 IOPS, 729.81 MiB/s [2024-11-15T10:36:16.205Z] [2024-11-15 10:36:16.196184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x53f750) with pdu=0x200016eff3c8 00:18:15.352 [2024-11-15 10:36:16.196284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.352 [2024-11-15 10:36:16.196306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.611 00:18:15.611 Latency(us) 00:18:15.611 [2024-11-15T10:36:16.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.611 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:15.611 nvme0n1 : 2.00 5837.47 729.68 0.00 0.00 2734.72 1832.03 12988.04 00:18:15.611 [2024-11-15T10:36:16.464Z] =================================================================================================================== 00:18:15.611 [2024-11-15T10:36:16.464Z] Total : 5837.47 729.68 0.00 0.00 2734.72 1832.03 12988.04 00:18:15.611 { 00:18:15.611 "results": [ 00:18:15.611 { 00:18:15.611 "job": "nvme0n1", 00:18:15.611 "core_mask": "0x2", 00:18:15.611 "workload": "randwrite", 00:18:15.611 "status": "finished", 00:18:15.611 "queue_depth": 16, 00:18:15.611 "io_size": 131072, 00:18:15.611 "runtime": 2.004292, 00:18:15.611 "iops": 5837.472783406809, 00:18:15.611 "mibps": 729.6840979258511, 00:18:15.611 "io_failed": 0, 00:18:15.611 "io_timeout": 0, 00:18:15.611 "avg_latency_us": 2734.724475524475, 00:18:15.611 "min_latency_us": 1832.0290909090909, 00:18:15.611 "max_latency_us": 12988.043636363636 00:18:15.611 } 00:18:15.611 ], 00:18:15.611 "core_count": 1 00:18:15.611 } 00:18:15.611 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:15.611 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:15.611 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:15.611 | .driver_specific 00:18:15.611 | .nvme_error 00:18:15.611 | .status_code 00:18:15.611 | .command_transient_transport_error' 00:18:15.611 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:15.870 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 378 > 0 )) 
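[editor's note] The get_transient_errcount step traced just above pulls the digest-error tally straight out of bdevperf's iostat: it calls bdev_get_iostat -b nvme0n1 over the bperf RPC socket and filters the NVMe error counters with the jq expression shown in the trace, then asserts the count is non-zero (378 in this run). A minimal stand-alone sketch of the same query follows; it assumes a bdevperf instance is already listening on /var/tmp/bperf.sock and exposes a bdev named nvme0n1, both taken from the trace above.

    #!/usr/bin/env bash
    # Sketch: count COMMAND TRANSIENT TRANSPORT ERROR completions seen by a bdevperf bdev.
    # Assumes bdevperf is already running with its RPC socket at /var/tmp/bperf.sock
    # and that the NVMe bdev under test is named nvme0n1 (both from the trace above).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')

    echo "transient transport errors: $errcount"
    (( errcount > 0 )) || exit 1   # the digest-error test expects at least one

This is only a reconstruction of the check performed by host/digest.sh at line 71 of the trace, not the script itself.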
00:18:15.870 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80619 00:18:15.870 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80619 ']' 00:18:15.870 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80619 00:18:15.870 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:18:15.870 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:15.870 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80619 00:18:15.871 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:15.871 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:15.871 killing process with pid 80619 00:18:15.871 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80619' 00:18:15.871 Received shutdown signal, test time was about 2.000000 seconds 00:18:15.871 00:18:15.871 Latency(us) 00:18:15.871 [2024-11-15T10:36:16.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.871 [2024-11-15T10:36:16.724Z] =================================================================================================================== 00:18:15.871 [2024-11-15T10:36:16.724Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:15.871 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80619 00:18:15.871 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80619 00:18:16.130 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80421 00:18:16.130 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80421 ']' 00:18:16.130 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80421 00:18:16.130 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:18:16.130 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:16.130 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80421 00:18:16.130 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:16.130 killing process with pid 80421 00:18:16.130 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:16.130 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80421' 00:18:16.130 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80421 00:18:16.130 10:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80421 00:18:16.390 00:18:16.390 real 0m17.139s 00:18:16.390 user 0m32.878s 00:18:16.390 sys 0m4.893s 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:16.390 ************************************ 00:18:16.390 END TEST nvmf_digest_error 00:18:16.390 ************************************ 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:16.390 rmmod nvme_tcp 00:18:16.390 rmmod nvme_fabrics 00:18:16.390 rmmod nvme_keyring 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80421 ']' 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80421 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 80421 ']' 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 80421 00:18:16.390 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (80421) - No such process 00:18:16.390 Process with pid 80421 is not found 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 80421 is not found' 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:16.390 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:16.649 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:16.649 10:36:17 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:16.649 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:16.649 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:16.649 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:16.649 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:16.649 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:16.650 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:16.650 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:16.650 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:16.650 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:16.650 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.650 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:16.650 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.650 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:18:16.650 00:18:16.650 real 0m37.182s 00:18:16.650 user 1m10.494s 00:18:16.650 sys 0m10.019s 00:18:16.650 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:16.650 10:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:16.650 ************************************ 00:18:16.650 END TEST nvmf_digest 00:18:16.650 ************************************ 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.910 ************************************ 00:18:16.910 START TEST nvmf_host_multipath 00:18:16.910 ************************************ 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:16.910 * Looking for test storage... 
00:18:16.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:16.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.910 --rc genhtml_branch_coverage=1 00:18:16.910 --rc genhtml_function_coverage=1 00:18:16.910 --rc genhtml_legend=1 00:18:16.910 --rc geninfo_all_blocks=1 00:18:16.910 --rc geninfo_unexecuted_blocks=1 00:18:16.910 00:18:16.910 ' 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:16.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.910 --rc genhtml_branch_coverage=1 00:18:16.910 --rc genhtml_function_coverage=1 00:18:16.910 --rc genhtml_legend=1 00:18:16.910 --rc geninfo_all_blocks=1 00:18:16.910 --rc geninfo_unexecuted_blocks=1 00:18:16.910 00:18:16.910 ' 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:16.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.910 --rc genhtml_branch_coverage=1 00:18:16.910 --rc genhtml_function_coverage=1 00:18:16.910 --rc genhtml_legend=1 00:18:16.910 --rc geninfo_all_blocks=1 00:18:16.910 --rc geninfo_unexecuted_blocks=1 00:18:16.910 00:18:16.910 ' 00:18:16.910 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:16.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.911 --rc genhtml_branch_coverage=1 00:18:16.911 --rc genhtml_function_coverage=1 00:18:16.911 --rc genhtml_legend=1 00:18:16.911 --rc geninfo_all_blocks=1 00:18:16.911 --rc geninfo_unexecuted_blocks=1 00:18:16.911 00:18:16.911 ' 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:16.911 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:16.911 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.912 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:16.912 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:16.912 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:16.912 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:16.912 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:16.912 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:16.912 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.912 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:16.912 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:16.912 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:16.912 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:16.912 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:17.171 Cannot find device "nvmf_init_br" 00:18:17.171 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:18:17.171 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:17.171 Cannot find device "nvmf_init_br2" 00:18:17.171 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:18:17.171 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:17.171 Cannot find device "nvmf_tgt_br" 00:18:17.171 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:18:17.171 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:17.171 Cannot find device "nvmf_tgt_br2" 00:18:17.171 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:18:17.171 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:17.171 Cannot find device "nvmf_init_br" 00:18:17.171 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:18:17.171 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:17.171 Cannot find device "nvmf_init_br2" 00:18:17.171 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:18:17.171 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:17.171 Cannot find device "nvmf_tgt_br" 00:18:17.171 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:18:17.171 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:17.171 Cannot find device "nvmf_tgt_br2" 00:18:17.171 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:18:17.171 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:17.171 Cannot find device "nvmf_br" 00:18:17.171 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:18:17.171 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:17.171 Cannot find device "nvmf_init_if" 00:18:17.171 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:18:17.171 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:17.171 Cannot find device "nvmf_init_if2" 00:18:17.172 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:18:17.172 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:18:17.172 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:17.172 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:18:17.172 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:17.172 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:17.172 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:18:17.172 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:17.172 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:17.172 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:17.172 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:17.172 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:17.172 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:17.172 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:17.172 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:17.172 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:17.172 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:17.172 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:17.172 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:17.172 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:17.172 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:17.172 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:17.172 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:17.172 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
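[editor's note] The nvmf_veth_init trace around this point builds the test network entirely from virtual devices: namespace nvmf_tgt_ns_spdk holds the two target-side veth ends (10.0.0.3 and 10.0.0.4), the two initiator-side ends (10.0.0.1 and 10.0.0.2) stay in the root namespace, and all four peer interfaces are enslaved to one bridge, nvmf_br; the remaining bridge attachments, iptables ACCEPT rules and connectivity pings follow below. A condensed sketch of the same topology with a single initiator/target pair, reusing the interface names and addresses from the trace (run as root; this is an illustration, not the harness code):

    #!/usr/bin/env bash
    # Sketch: one initiator/target veth pair bridged the way the harness does it.
    set -e

    ip netns add nvmf_tgt_ns_spdk

    # one veth pair for the initiator side, one for the target side
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br

    # target end lives inside the namespace, initiator end stays in the root ns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # bridge the *_br peer ends so initiator and target can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    ping -c 1 10.0.0.3   # initiator -> target, should succeed

The "Cannot find device" and "Cannot open network namespace" messages above are expected: the harness tears down any leftover topology with "|| true" before recreating it.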
00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:17.432 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:17.432 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:18:17.432 00:18:17.432 --- 10.0.0.3 ping statistics --- 00:18:17.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.432 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:17.432 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:17.432 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:18:17.432 00:18:17.432 --- 10.0.0.4 ping statistics --- 00:18:17.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.432 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:17.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:17.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:17.432 00:18:17.432 --- 10.0.0.1 ping statistics --- 00:18:17.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.432 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:17.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:17.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:18:17.432 00:18:17.432 --- 10.0.0.2 ping statistics --- 00:18:17.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.432 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80930 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80930 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 80930 ']' 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:17.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:17.432 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:17.432 [2024-11-15 10:36:18.230930] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:18:17.432 [2024-11-15 10:36:18.231034] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.693 [2024-11-15 10:36:18.383577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:17.693 [2024-11-15 10:36:18.443332] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.693 [2024-11-15 10:36:18.443425] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.693 [2024-11-15 10:36:18.443438] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.693 [2024-11-15 10:36:18.443447] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.693 [2024-11-15 10:36:18.443454] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:17.693 [2024-11-15 10:36:18.444646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.693 [2024-11-15 10:36:18.444659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.693 [2024-11-15 10:36:18.500331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:17.955 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:17.955 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:18:17.955 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:17.955 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:17.955 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:17.955 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.955 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80930 00:18:17.955 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:18.214 [2024-11-15 10:36:18.885566] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.214 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:18.473 Malloc0 00:18:18.473 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:18.732 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:18.991 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:19.250 [2024-11-15 10:36:19.962292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:19.250 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:19.509 [2024-11-15 10:36:20.270468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:19.509 10:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80974 00:18:19.509 10:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:19.509 10:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80974 /var/tmp/bdevperf.sock 00:18:19.509 10:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:19.509 10:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 80974 ']' 00:18:19.509 10:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.509 10:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:19.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.509 10:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.509 10:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:19.509 10:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:20.886 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:20.886 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:18:20.886 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:20.886 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:21.144 Nvme0n1 00:18:21.145 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:21.403 Nvme0n1 00:18:21.403 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:21.403 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:22.781 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:22.781 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:22.781 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:23.041 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:23.041 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81019 00:18:23.041 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80930 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:23.041 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:29.675 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:29.675 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:29.675 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:29.675 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:29.675 Attaching 4 probes... 00:18:29.675 @path[10.0.0.3, 4421]: 16591 00:18:29.675 @path[10.0.0.3, 4421]: 17129 00:18:29.675 @path[10.0.0.3, 4421]: 17121 00:18:29.675 @path[10.0.0.3, 4421]: 17050 00:18:29.675 @path[10.0.0.3, 4421]: 17320 00:18:29.675 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:29.675 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:29.675 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:29.675 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:29.675 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:29.675 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:29.675 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81019 00:18:29.675 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:29.675 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:29.675 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:29.934 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:30.193 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:30.193 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81138 00:18:30.193 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80930 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:30.193 10:36:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:36.826 10:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:36.826 10:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:36.826 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:36.826 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:36.826 Attaching 4 probes... 00:18:36.826 @path[10.0.0.3, 4420]: 16613 00:18:36.826 @path[10.0.0.3, 4420]: 15854 00:18:36.826 @path[10.0.0.3, 4420]: 16506 00:18:36.826 @path[10.0.0.3, 4420]: 16505 00:18:36.826 @path[10.0.0.3, 4420]: 17302 00:18:36.826 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:36.826 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:36.826 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:36.826 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:36.826 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:36.826 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:36.826 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81138 00:18:36.826 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:36.826 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:36.826 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:36.826 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:37.084 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:37.084 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81256 00:18:37.084 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80930 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:37.084 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:43.646 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:43.646 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:43.646 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:43.646 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:43.646 Attaching 4 probes... 00:18:43.646 @path[10.0.0.3, 4421]: 13778 00:18:43.646 @path[10.0.0.3, 4421]: 17269 00:18:43.646 @path[10.0.0.3, 4421]: 15888 00:18:43.646 @path[10.0.0.3, 4421]: 17059 00:18:43.646 @path[10.0.0.3, 4421]: 16970 00:18:43.646 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:43.646 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:43.646 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:43.646 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:43.647 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:43.647 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:43.647 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81256 00:18:43.647 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:43.647 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:43.647 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:43.647 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:44.215 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:44.215 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81373 00:18:44.215 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80930 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:44.215 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:50.876 10:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:50.876 10:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:50.876 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:50.876 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:50.876 Attaching 4 probes... 
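The set_ANA_state helper traced at host/multipath.sh@58-59 takes one ANA state per listener and applies them to ports 4420 and 4421 in that order; a sketch consistent with every invocation in this log (rpc_py stands in for the full scripts/rpc.py path shown above):

  set_ANA_state() {    # set_ANA_state <state for 4420> <state for 4421>
      local rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
      "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$1"
      "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$2"
  }
  set_ANA_state inaccessible inaccessible    # the call that produced the two RPCs above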
00:18:50.876 00:18:50.876 00:18:50.876 00:18:50.876 00:18:50.876 00:18:50.876 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:50.876 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:50.876 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:50.876 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:50.876 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:50.876 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:50.876 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81373 00:18:50.876 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:50.876 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:50.876 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:50.876 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:51.136 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:51.136 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81481 00:18:51.136 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80930 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:51.136 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:57.695 10:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:57.695 10:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:57.695 10:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:57.695 Attaching 4 probes... 
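confirm_io_on_port, traced at host/multipath.sh@64-73, verifies each ANA transition in two ways: which listener the target reports in the requested state, and which port the I/O actually hit according to the @path counters that nvmf_path.bt writes into trace.txt. An approximate reconstruction from the xtrace; variable names and minor details are assumptions, and the empty-string case above is the "no usable path" check:

  confirm_io_on_port() {    # confirm_io_on_port <expected ana_state> <expected port>
      local rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
      # start the bpftrace probes against the target pid (nvmfapp_pid, 80930 in this run);
      # the helper prints the tracer's pid and the counters end up in trace.txt
      dtrace_pid=$(/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh "$nvmfapp_pid" \
          /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt)
      sleep 6    # let bdevperf push I/O while the probes count per-path completions
      active_port=$("$rpc_py" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
          jq -r ".[] | select (.ana_states[0].ana_state==\"$1\") | .address.trsvcid")
      port=$(cat trace.txt | awk '$1=="@path[10.0.0.3," {print $2}' | cut -d ']' -f1 | sed -n 1p)
      [[ $port == "$2" ]] && [[ $active_port == "$2" ]]    # both empty when no path carries I/O
      kill $dtrace_pid
      rm -f trace.txt
  }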
00:18:57.695 @path[10.0.0.3, 4421]: 14807 00:18:57.695 @path[10.0.0.3, 4421]: 16958 00:18:57.695 @path[10.0.0.3, 4421]: 16128 00:18:57.695 @path[10.0.0.3, 4421]: 15192 00:18:57.695 @path[10.0.0.3, 4421]: 15344 00:18:57.695 10:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:57.695 10:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:57.695 10:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:57.695 10:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:57.695 10:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:57.695 10:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:57.695 10:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:57.695 10:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81481 00:18:57.695 10:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:57.695 10:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:57.695 10:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:18:58.632 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:58.632 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81606 00:18:58.632 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80930 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:58.632 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:05.191 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:05.191 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:05.191 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:05.191 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:05.191 Attaching 4 probes... 
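The step above is the path-loss case: the optimized 4421 listener is removed outright, the script pauses briefly, and confirm_io_on_port then expects traffic to show up on the remaining non_optimized 4420 path. Condensed from the trace, using the same rpc.py path as above:

  # drop the optimized listener and expect I/O to fail over to port 4420
  "$rpc_py" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  sleep 1
  confirm_io_on_port non_optimized 4420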
00:19:05.191 @path[10.0.0.3, 4420]: 16918 00:19:05.191 @path[10.0.0.3, 4420]: 17907 00:19:05.191 @path[10.0.0.3, 4420]: 17569 00:19:05.191 @path[10.0.0.3, 4420]: 17707 00:19:05.191 @path[10.0.0.3, 4420]: 17664 00:19:05.191 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:05.191 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:05.191 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:05.191 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:05.191 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:05.191 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:05.191 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81606 00:19:05.191 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:05.191 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:05.450 [2024-11-15 10:37:06.052874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:05.450 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:05.708 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:12.273 10:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:12.273 10:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81785 00:19:12.273 10:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80930 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:12.273 10:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:17.540 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:17.540 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:17.801 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:17.801 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:17.802 Attaching 4 probes... 
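The final transition is the failback: the 4421 listener is added back, promoted to optimized, and after the settle time at multipath.sh@111 the I/O is expected to leave 4420 again. Condensed from the trace above:

  # re-add the second listener, mark it optimized, and expect I/O back on port 4421
  "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
  sleep 6
  confirm_io_on_port optimized 4421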
00:19:17.802 @path[10.0.0.3, 4421]: 16967 00:19:17.802 @path[10.0.0.3, 4421]: 17434 00:19:17.802 @path[10.0.0.3, 4421]: 17241 00:19:17.802 @path[10.0.0.3, 4421]: 17414 00:19:17.802 @path[10.0.0.3, 4421]: 17499 00:19:17.802 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:17.802 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:17.802 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:17.802 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:17.802 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:17.802 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:17.802 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81785 00:19:17.802 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:17.802 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80974 00:19:17.802 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 80974 ']' 00:19:17.802 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 80974 00:19:17.802 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname 00:19:17.802 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:17.802 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80974 00:19:17.802 killing process with pid 80974 00:19:17.802 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:17.802 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:17.802 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80974' 00:19:17.802 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 80974 00:19:17.802 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 80974 00:19:18.070 { 00:19:18.070 "results": [ 00:19:18.070 { 00:19:18.070 "job": "Nvme0n1", 00:19:18.070 "core_mask": "0x4", 00:19:18.070 "workload": "verify", 00:19:18.070 "status": "terminated", 00:19:18.070 "verify_range": { 00:19:18.070 "start": 0, 00:19:18.070 "length": 16384 00:19:18.070 }, 00:19:18.070 "queue_depth": 128, 00:19:18.070 "io_size": 4096, 00:19:18.070 "runtime": 56.308647, 00:19:18.070 "iops": 7202.144281676667, 00:19:18.070 "mibps": 28.13337610029948, 00:19:18.070 "io_failed": 0, 00:19:18.070 "io_timeout": 0, 00:19:18.070 "avg_latency_us": 17739.232793061066, 00:19:18.070 "min_latency_us": 173.14909090909092, 00:19:18.070 "max_latency_us": 7046430.72 00:19:18.070 } 00:19:18.070 ], 00:19:18.070 "core_count": 1 00:19:18.070 } 00:19:18.070 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80974 00:19:18.070 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:18.070 [2024-11-15 10:36:20.341025] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 
24.03.0 initialization... 00:19:18.070 [2024-11-15 10:36:20.341136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80974 ] 00:19:18.070 [2024-11-15 10:36:20.491043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.070 [2024-11-15 10:36:20.549887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.070 [2024-11-15 10:36:20.609809] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:18.070 Running I/O for 90 seconds... 00:19:18.070 6420.00 IOPS, 25.08 MiB/s [2024-11-15T10:37:18.923Z] 6507.50 IOPS, 25.42 MiB/s [2024-11-15T10:37:18.923Z] 7117.00 IOPS, 27.80 MiB/s [2024-11-15T10:37:18.923Z] 7481.75 IOPS, 29.23 MiB/s [2024-11-15T10:37:18.923Z] 7697.40 IOPS, 30.07 MiB/s [2024-11-15T10:37:18.923Z] 7833.17 IOPS, 30.60 MiB/s [2024-11-15T10:37:18.923Z] 7950.71 IOPS, 31.06 MiB/s [2024-11-15T10:37:18.923Z] 8041.88 IOPS, 31.41 MiB/s [2024-11-15T10:37:18.923Z] [2024-11-15 10:36:30.851021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.070 [2024-11-15 10:36:30.851115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:18.070 [2024-11-15 10:36:30.851181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.070 [2024-11-15 10:36:30.851205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:18.070 [2024-11-15 10:36:30.851229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.070 [2024-11-15 10:36:30.851245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:18.070 [2024-11-15 10:36:30.851268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.070 [2024-11-15 10:36:30.851284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:18.070 [2024-11-15 10:36:30.851306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.070 [2024-11-15 10:36:30.851321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:18.070 [2024-11-15 10:36:30.851354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.070 [2024-11-15 10:36:30.851373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:18.070 [2024-11-15 10:36:30.851396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.070 [2024-11-15 10:36:30.851412] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:18.070 [2024-11-15 10:36:30.851435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.070 [2024-11-15 10:36:30.851450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:18.070 [2024-11-15 10:36:30.851490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.070 [2024-11-15 10:36:30.851512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:18.070 [2024-11-15 10:36:30.851536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.070 [2024-11-15 10:36:30.851585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:18.070 [2024-11-15 10:36:30.851609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.070 [2024-11-15 10:36:30.851625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:18.070 [2024-11-15 10:36:30.851659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.070 [2024-11-15 10:36:30.851675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:18.070 [2024-11-15 10:36:30.851696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.070 [2024-11-15 10:36:30.851711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:18.070 [2024-11-15 10:36:30.851733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.070 [2024-11-15 10:36:30.851748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:18.070 [2024-11-15 10:36:30.851779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.071 [2024-11-15 10:36:30.851796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.851818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.071 [2024-11-15 10:36:30.851834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.851856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:18.071 [2024-11-15 10:36:30.851872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.851894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.071 [2024-11-15 10:36:30.851910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.851931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.071 [2024-11-15 10:36:30.851946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.851968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.071 [2024-11-15 10:36:30.851983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.071 [2024-11-15 10:36:30.852028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.071 [2024-11-15 10:36:30.852090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.071 [2024-11-15 10:36:30.852131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.071 [2024-11-15 10:36:30.852168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.071 [2024-11-15 10:36:30.852206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.071 [2024-11-15 10:36:30.852244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:85 nsid:1 lba:24424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.071 [2024-11-15 10:36:30.852281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.071 [2024-11-15 10:36:30.852318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.071 [2024-11-15 10:36:30.852355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.071 [2024-11-15 10:36:30.852392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.071 [2024-11-15 10:36:30.852429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.071 [2024-11-15 10:36:30.852467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.071 [2024-11-15 10:36:30.852509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.071 [2024-11-15 10:36:30.852548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.071 [2024-11-15 10:36:30.852594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.071 [2024-11-15 10:36:30.852634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852656] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.071 [2024-11-15 10:36:30.852671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.071 [2024-11-15 10:36:30.852714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.071 [2024-11-15 10:36:30.852751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.071 [2024-11-15 10:36:30.852788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.071 [2024-11-15 10:36:30.852825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.071 [2024-11-15 10:36:30.852862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.071 [2024-11-15 10:36:30.852898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.071 [2024-11-15 10:36:30.852935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.071 [2024-11-15 10:36:30.852973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:18.071 [2024-11-15 10:36:30.852995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.071 [2024-11-15 10:36:30.853010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001d p:0 m:0 dnr:0 
00:19:18.071 [2024-11-15 10:36:30.853039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.072 [2024-11-15 10:36:30.853088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.853113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.072 [2024-11-15 10:36:30.853130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.853167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.072 [2024-11-15 10:36:30.853187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.853211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.072 [2024-11-15 10:36:30.853228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.853250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.072 [2024-11-15 10:36:30.853265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.853287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.072 [2024-11-15 10:36:30.853302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.853324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.072 [2024-11-15 10:36:30.853340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.853362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.072 [2024-11-15 10:36:30.853378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.853399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.072 [2024-11-15 10:36:30.853415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.853436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.072 [2024-11-15 10:36:30.853452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:124 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.853473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.072 [2024-11-15 10:36:30.853488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.853510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.072 [2024-11-15 10:36:30.853525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.853547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.072 [2024-11-15 10:36:30.853572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.853595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.072 [2024-11-15 10:36:30.853611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.853632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.072 [2024-11-15 10:36:30.853648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.853670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.072 [2024-11-15 10:36:30.853685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.853708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.072 [2024-11-15 10:36:30.853723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.853746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.072 [2024-11-15 10:36:30.853761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.853786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.072 [2024-11-15 10:36:30.853810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.853831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.072 [2024-11-15 10:36:30.853847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.853868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.072 [2024-11-15 10:36:30.853884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.853905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.072 [2024-11-15 10:36:30.853921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.853943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.072 [2024-11-15 10:36:30.853958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.853980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.072 [2024-11-15 10:36:30.853995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.854017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.072 [2024-11-15 10:36:30.854039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.854090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.072 [2024-11-15 10:36:30.854108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.854130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.072 [2024-11-15 10:36:30.854156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.854178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.072 [2024-11-15 10:36:30.854194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.854215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.072 [2024-11-15 10:36:30.854239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.854262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:18.072 [2024-11-15 10:36:30.854278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.854300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.072 [2024-11-15 10:36:30.854323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.854345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.072 [2024-11-15 10:36:30.854361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:18.072 [2024-11-15 10:36:30.854382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.073 [2024-11-15 10:36:30.854397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.854419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.073 [2024-11-15 10:36:30.854434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.854456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.073 [2024-11-15 10:36:30.854471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.854493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.073 [2024-11-15 10:36:30.854509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.854539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.073 [2024-11-15 10:36:30.854555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.854585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.073 [2024-11-15 10:36:30.854601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.854623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.073 [2024-11-15 10:36:30.854647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.854669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:24672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.073 [2024-11-15 10:36:30.854685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.854710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.073 [2024-11-15 10:36:30.854725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.854747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.073 [2024-11-15 10:36:30.854762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.854784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.073 [2024-11-15 10:36:30.854799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.854821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.073 [2024-11-15 10:36:30.854836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.854858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.073 [2024-11-15 10:36:30.854879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.854902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.073 [2024-11-15 10:36:30.854917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.854938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.073 [2024-11-15 10:36:30.854954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.854975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.073 [2024-11-15 10:36:30.854991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.855013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.073 [2024-11-15 10:36:30.855028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.855079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.073 [2024-11-15 10:36:30.855109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.855147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.073 [2024-11-15 10:36:30.855168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.855192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.073 [2024-11-15 10:36:30.855208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.855236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.073 [2024-11-15 10:36:30.855252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.855274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.073 [2024-11-15 10:36:30.855289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.855311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.073 [2024-11-15 10:36:30.855327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.855370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.073 [2024-11-15 10:36:30.855389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.855411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.073 [2024-11-15 10:36:30.855427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.855448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.073 [2024-11-15 10:36:30.855464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.855486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.073 [2024-11-15 10:36:30.855501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
00:19:18.073 [2024-11-15 10:36:30.855529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.073 [2024-11-15 10:36:30.855545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.855567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.073 [2024-11-15 10:36:30.855589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.855611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.073 [2024-11-15 10:36:30.855640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.855664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.073 [2024-11-15 10:36:30.855680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.855701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.073 [2024-11-15 10:36:30.855716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:18.073 [2024-11-15 10:36:30.855738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.074 [2024-11-15 10:36:30.855754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:30.855775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.074 [2024-11-15 10:36:30.855791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:30.855812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.074 [2024-11-15 10:36:30.855827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:30.855849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.074 [2024-11-15 10:36:30.855864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:30.855892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.074 [2024-11-15 10:36:30.855908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:30.855930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.074 [2024-11-15 10:36:30.855945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:30.855967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.074 [2024-11-15 10:36:30.855983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:30.856015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.074 [2024-11-15 10:36:30.856040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:30.856075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.074 [2024-11-15 10:36:30.856094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:30.857562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.074 [2024-11-15 10:36:30.857606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:30.857638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.074 [2024-11-15 10:36:30.857657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:30.857680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.074 [2024-11-15 10:36:30.857696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:30.857718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.074 [2024-11-15 10:36:30.857739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:30.857762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.074 [2024-11-15 10:36:30.857777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:30.857799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.074 [2024-11-15 10:36:30.857815] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:30.857836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.074 [2024-11-15 10:36:30.857852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:30.857874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.074 [2024-11-15 10:36:30.857889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:30.857926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.074 [2024-11-15 10:36:30.857947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:18.074 8091.56 IOPS, 31.61 MiB/s [2024-11-15T10:37:18.927Z] 8124.00 IOPS, 31.73 MiB/s [2024-11-15T10:37:18.927Z] 8105.45 IOPS, 31.66 MiB/s [2024-11-15T10:37:18.927Z] 8115.33 IOPS, 31.70 MiB/s [2024-11-15T10:37:18.927Z] 8127.69 IOPS, 31.75 MiB/s [2024-11-15T10:37:18.927Z] 8162.86 IOPS, 31.89 MiB/s [2024-11-15T10:37:18.927Z] 8194.67 IOPS, 32.01 MiB/s [2024-11-15T10:37:18.927Z] [2024-11-15 10:36:37.517659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.074 [2024-11-15 10:36:37.517737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:37.517816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.074 [2024-11-15 10:36:37.517840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:37.517863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.074 [2024-11-15 10:36:37.517879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:37.517934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.074 [2024-11-15 10:36:37.517950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:37.517972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.074 [2024-11-15 10:36:37.517987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:37.518008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:18.074 [2024-11-15 10:36:37.518023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:37.518044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.074 [2024-11-15 10:36:37.518058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:37.518096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.074 [2024-11-15 10:36:37.518112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:37.518133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.074 [2024-11-15 10:36:37.518149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:37.518170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.074 [2024-11-15 10:36:37.518184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:37.518206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.074 [2024-11-15 10:36:37.518220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:37.518242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.074 [2024-11-15 10:36:37.518256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:37.518277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.074 [2024-11-15 10:36:37.518292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:18.074 [2024-11-15 10:36:37.518313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.075 [2024-11-15 10:36:37.518328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.518349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.075 [2024-11-15 10:36:37.518364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.518385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:60 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.075 [2024-11-15 10:36:37.518411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.518597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.075 [2024-11-15 10:36:37.518623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.518647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.075 [2024-11-15 10:36:37.518662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.518684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.075 [2024-11-15 10:36:37.518699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.518720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.075 [2024-11-15 10:36:37.518735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.518756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.075 [2024-11-15 10:36:37.518771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.518792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.075 [2024-11-15 10:36:37.518807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.518829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.075 [2024-11-15 10:36:37.518843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.518865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.075 [2024-11-15 10:36:37.518882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.518905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.075 [2024-11-15 10:36:37.518920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.518941] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.075 [2024-11-15 10:36:37.518956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.518977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.075 [2024-11-15 10:36:37.518993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.519014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.075 [2024-11-15 10:36:37.519039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.519078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.075 [2024-11-15 10:36:37.519095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.519118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.075 [2024-11-15 10:36:37.519133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.519155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.075 [2024-11-15 10:36:37.519169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.519191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.075 [2024-11-15 10:36:37.519206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.519228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.075 [2024-11-15 10:36:37.519242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.519264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.075 [2024-11-15 10:36:37.519279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.519300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.075 [2024-11-15 10:36:37.519315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005e p:0 m:0 dnr:0 
00:19:18.075 [2024-11-15 10:36:37.519348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.075 [2024-11-15 10:36:37.519366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.519394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.075 [2024-11-15 10:36:37.519409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.519431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.075 [2024-11-15 10:36:37.519446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.519468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.075 [2024-11-15 10:36:37.519483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.519504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.075 [2024-11-15 10:36:37.519528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:18.075 [2024-11-15 10:36:37.519559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.075 [2024-11-15 10:36:37.519575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.519597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.076 [2024-11-15 10:36:37.519612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.519633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.076 [2024-11-15 10:36:37.519648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.519670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.076 [2024-11-15 10:36:37.519684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.519706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.076 [2024-11-15 10:36:37.519721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.519742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.076 [2024-11-15 10:36:37.519757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.519779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.076 [2024-11-15 10:36:37.519793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.519814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.076 [2024-11-15 10:36:37.519829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.519851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.076 [2024-11-15 10:36:37.519866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.519887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.076 [2024-11-15 10:36:37.519902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.519924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.076 [2024-11-15 10:36:37.519939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.519960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.076 [2024-11-15 10:36:37.519975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.520003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.076 [2024-11-15 10:36:37.520019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.520042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.076 [2024-11-15 10:36:37.520072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.520097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.076 [2024-11-15 10:36:37.520112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.520134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.076 [2024-11-15 10:36:37.520149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.520190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.076 [2024-11-15 10:36:37.520210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.520233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.076 [2024-11-15 10:36:37.520249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.520271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.076 [2024-11-15 10:36:37.520286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.520307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.076 [2024-11-15 10:36:37.520322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.520343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.076 [2024-11-15 10:36:37.520358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.520380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.076 [2024-11-15 10:36:37.520394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.520416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.076 [2024-11-15 10:36:37.520431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.520452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.076 [2024-11-15 10:36:37.520467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.520489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:18.076 [2024-11-15 10:36:37.520513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.520537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.076 [2024-11-15 10:36:37.520552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.520573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.076 [2024-11-15 10:36:37.520588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.520610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.076 [2024-11-15 10:36:37.520625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.520646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.076 [2024-11-15 10:36:37.520661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.520683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.076 [2024-11-15 10:36:37.520698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.520720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.076 [2024-11-15 10:36:37.520735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.520757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.076 [2024-11-15 10:36:37.520773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:18.076 [2024-11-15 10:36:37.520794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.076 [2024-11-15 10:36:37.520809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.520831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.077 [2024-11-15 10:36:37.520846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.520867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 
lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.077 [2024-11-15 10:36:37.520882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.520904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.077 [2024-11-15 10:36:37.520918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.520940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.077 [2024-11-15 10:36:37.520980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.521005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.077 [2024-11-15 10:36:37.521020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.521042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.077 [2024-11-15 10:36:37.521071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.521095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.077 [2024-11-15 10:36:37.521111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.521132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.077 [2024-11-15 10:36:37.521147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.521168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.077 [2024-11-15 10:36:37.521183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.521205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.077 [2024-11-15 10:36:37.521219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.521241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.077 [2024-11-15 10:36:37.521255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.521277] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.077 [2024-11-15 10:36:37.521292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.521315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.077 [2024-11-15 10:36:37.521330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.521351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.077 [2024-11-15 10:36:37.521366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.521388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.077 [2024-11-15 10:36:37.521403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.521424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.077 [2024-11-15 10:36:37.521439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.521469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.077 [2024-11-15 10:36:37.521485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.521507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.077 [2024-11-15 10:36:37.521521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.521543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.077 [2024-11-15 10:36:37.521557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.521578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.077 [2024-11-15 10:36:37.521593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.521614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.077 [2024-11-15 10:36:37.521629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 
00:19:18.077 [2024-11-15 10:36:37.521650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.077 [2024-11-15 10:36:37.521664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.521686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.077 [2024-11-15 10:36:37.521701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.521722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.077 [2024-11-15 10:36:37.521737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.521758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:77576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.077 [2024-11-15 10:36:37.521772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.521794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.077 [2024-11-15 10:36:37.521808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.521829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.077 [2024-11-15 10:36:37.521844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.521865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.077 [2024-11-15 10:36:37.521880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.521909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.077 [2024-11-15 10:36:37.521924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.521946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.077 [2024-11-15 10:36:37.521961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.522726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.077 [2024-11-15 10:36:37.522756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.522790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.077 [2024-11-15 10:36:37.522807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:18.077 [2024-11-15 10:36:37.522836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:37.522852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:37.522880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:37.522895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:37.522923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:37.522938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:37.522966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:37.522981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:37.523008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:37.523023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:37.523065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:37.523084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:37.523129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:37.523149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:37.523178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:37.523195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:37.523223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:37.523251] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:37.523281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:37.523297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:37.523325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:37.523361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:37.523400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:37.523416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:37.523445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:37.523460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:37.523488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:37.523504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:37.523535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:37.523551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:37.523579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:37.523595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:37.523622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:37.523637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:37.523665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:37.523680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:37.523708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:18.078 [2024-11-15 10:36:37.523723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:18.078 7772.00 IOPS, 30.36 MiB/s [2024-11-15T10:37:18.931Z] 7735.59 IOPS, 30.22 MiB/s [2024-11-15T10:37:18.931Z] 7783.17 IOPS, 30.40 MiB/s [2024-11-15T10:37:18.931Z] 7793.32 IOPS, 30.44 MiB/s [2024-11-15T10:37:18.931Z] 7829.65 IOPS, 30.58 MiB/s [2024-11-15T10:37:18.931Z] 7861.76 IOPS, 30.71 MiB/s [2024-11-15T10:37:18.931Z] 7892.41 IOPS, 30.83 MiB/s [2024-11-15T10:37:18.931Z] [2024-11-15 10:36:44.775959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:107592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:44.776041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:44.776151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:44.776175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:44.776199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:107608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:44.776214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:44.776235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:107616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:44.776250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:44.776272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:107624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:44.776287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:44.776308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:107632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:44.776323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:44.776344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:44.776358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:44.776379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:107648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.078 [2024-11-15 10:36:44.776394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:44.776415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:125 nsid:1 lba:107208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.078 [2024-11-15 10:36:44.776430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:44.776451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.078 [2024-11-15 10:36:44.776466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:44.776498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.078 [2024-11-15 10:36:44.776513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:44.776533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.078 [2024-11-15 10:36:44.776548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:18.078 [2024-11-15 10:36:44.776569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:107240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.079 [2024-11-15 10:36:44.776584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.776626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.079 [2024-11-15 10:36:44.776645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.776673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.079 [2024-11-15 10:36:44.776688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.776713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:107264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.079 [2024-11-15 10:36:44.776728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.776768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:107656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.079 [2024-11-15 10:36:44.776789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.776812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:107664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.079 [2024-11-15 10:36:44.776828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 
10:36:44.776851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.079 [2024-11-15 10:36:44.776867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.776888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:107680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.079 [2024-11-15 10:36:44.776903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.776925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:107688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.079 [2024-11-15 10:36:44.776940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.776963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:107696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.079 [2024-11-15 10:36:44.776978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.777000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:107704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.079 [2024-11-15 10:36:44.777015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.777043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.079 [2024-11-15 10:36:44.777075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.777104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:107272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.079 [2024-11-15 10:36:44.777120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.777161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:107280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.079 [2024-11-15 10:36:44.777178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.777200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:107288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.079 [2024-11-15 10:36:44.777215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.777236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:107296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.079 [2024-11-15 10:36:44.777251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.777272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.079 [2024-11-15 10:36:44.777287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.777308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:107312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.079 [2024-11-15 10:36:44.777323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.777344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.079 [2024-11-15 10:36:44.777359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.777380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.079 [2024-11-15 10:36:44.777395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.777416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:107720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.079 [2024-11-15 10:36:44.777430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.777452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:107728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.079 [2024-11-15 10:36:44.777466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.777489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:107736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.079 [2024-11-15 10:36:44.777504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.777526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:107744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.079 [2024-11-15 10:36:44.777541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.777562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:107752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.079 [2024-11-15 10:36:44.777576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.777598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:107760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.079 [2024-11-15 10:36:44.777628] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.777651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:107768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.079 [2024-11-15 10:36:44.777666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.777687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:107776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.079 [2024-11-15 10:36:44.777702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.777723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:107784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.079 [2024-11-15 10:36:44.777738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.777759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:107792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.079 [2024-11-15 10:36:44.777774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.777795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.079 [2024-11-15 10:36:44.777810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.777831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.079 [2024-11-15 10:36:44.777846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:18.079 [2024-11-15 10:36:44.777867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.079 [2024-11-15 10:36:44.777882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.777903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.080 [2024-11-15 10:36:44.777918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.777939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.080 [2024-11-15 10:36:44.777954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.777975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:18.080 [2024-11-15 10:36:44.777990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.080 [2024-11-15 10:36:44.778032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.080 [2024-11-15 10:36:44.778093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.080 [2024-11-15 10:36:44.778139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.080 [2024-11-15 10:36:44.778176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:107880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.080 [2024-11-15 10:36:44.778211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.080 [2024-11-15 10:36:44.778251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.080 [2024-11-15 10:36:44.778287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.080 [2024-11-15 10:36:44.778323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.080 [2024-11-15 10:36:44.778359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:77 nsid:1 lba:107344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.080 [2024-11-15 10:36:44.778394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.080 [2024-11-15 10:36:44.778431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:107360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.080 [2024-11-15 10:36:44.778467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:107368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.080 [2024-11-15 10:36:44.778502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:107376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.080 [2024-11-15 10:36:44.778539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:107384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.080 [2024-11-15 10:36:44.778583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:107392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.080 [2024-11-15 10:36:44.778621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:107912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.080 [2024-11-15 10:36:44.778680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:107920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.080 [2024-11-15 10:36:44.778719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:107928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.080 [2024-11-15 10:36:44.778755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778776] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.080 [2024-11-15 10:36:44.778791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.080 [2024-11-15 10:36:44.778828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.080 [2024-11-15 10:36:44.778864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.080 [2024-11-15 10:36:44.778900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:107968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.080 [2024-11-15 10:36:44.778936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.080 [2024-11-15 10:36:44.778972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.778994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:107984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.080 [2024-11-15 10:36:44.779008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.779039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.080 [2024-11-15 10:36:44.779071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:18.080 [2024-11-15 10:36:44.779096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:108000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.081 [2024-11-15 10:36:44.779112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.779133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.081 [2024-11-15 10:36:44.779148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 
cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.779169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.081 [2024-11-15 10:36:44.779184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.779206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.081 [2024-11-15 10:36:44.779221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.779243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:108032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.081 [2024-11-15 10:36:44.779258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.779280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.081 [2024-11-15 10:36:44.779294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.779316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:107408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.081 [2024-11-15 10:36:44.779346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.779372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.081 [2024-11-15 10:36:44.779387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.779408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.081 [2024-11-15 10:36:44.779424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.779445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:107432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.081 [2024-11-15 10:36:44.779460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.779481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.081 [2024-11-15 10:36:44.779496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.779526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.081 [2024-11-15 10:36:44.779543] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.779565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:107456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.081 [2024-11-15 10:36:44.779580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.779601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:107464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.081 [2024-11-15 10:36:44.779616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.779637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:107472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.081 [2024-11-15 10:36:44.779652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.779673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.081 [2024-11-15 10:36:44.779692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.779713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:107488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.081 [2024-11-15 10:36:44.779728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.779749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:107496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.081 [2024-11-15 10:36:44.779764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.779785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:107504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.081 [2024-11-15 10:36:44.779799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.779820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.081 [2024-11-15 10:36:44.779835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.779857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:107520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.081 [2024-11-15 10:36:44.779880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.779906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.081 [2024-11-15 
10:36:44.779923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.779944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.081 [2024-11-15 10:36:44.779959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.779981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:108056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.081 [2024-11-15 10:36:44.780003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.780026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.081 [2024-11-15 10:36:44.780042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.780077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:108072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.081 [2024-11-15 10:36:44.780094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.780115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.081 [2024-11-15 10:36:44.780130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.780152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.081 [2024-11-15 10:36:44.780166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.780188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.081 [2024-11-15 10:36:44.780203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.780224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.081 [2024-11-15 10:36:44.780238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:18.081 [2024-11-15 10:36:44.780259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.081 [2024-11-15 10:36:44.780275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:44.780296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:108120 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:19:18.082 [2024-11-15 10:36:44.780311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:44.780332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.082 [2024-11-15 10:36:44.780347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:44.780367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.082 [2024-11-15 10:36:44.780392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:44.780413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.082 [2024-11-15 10:36:44.780427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:44.780448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.082 [2024-11-15 10:36:44.780471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:44.780494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.082 [2024-11-15 10:36:44.780516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:44.780538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:107528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.082 [2024-11-15 10:36:44.780553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:44.780575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:107536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.082 [2024-11-15 10:36:44.780589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:44.780611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:107544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.082 [2024-11-15 10:36:44.780626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:44.780647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:107552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.082 [2024-11-15 10:36:44.780662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:44.780683] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:65 nsid:1 lba:107560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.082 [2024-11-15 10:36:44.780707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:44.780729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:107568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.082 [2024-11-15 10:36:44.780743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:44.780764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:107576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.082 [2024-11-15 10:36:44.780779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:44.781518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:107584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.082 [2024-11-15 10:36:44.781549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:44.781584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.082 [2024-11-15 10:36:44.781602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:44.781632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.082 [2024-11-15 10:36:44.781648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:44.781677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.082 [2024-11-15 10:36:44.781694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:44.781736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.082 [2024-11-15 10:36:44.781753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:44.781783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.082 [2024-11-15 10:36:44.781798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:44.781827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.082 [2024-11-15 10:36:44.781843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 
10:36:44.781872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.082 [2024-11-15 10:36:44.781888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:44.781933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.082 [2024-11-15 10:36:44.781955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:18.082 7706.13 IOPS, 30.10 MiB/s [2024-11-15T10:37:18.935Z] 7385.04 IOPS, 28.85 MiB/s [2024-11-15T10:37:18.935Z] 7089.64 IOPS, 27.69 MiB/s [2024-11-15T10:37:18.935Z] 6816.96 IOPS, 26.63 MiB/s [2024-11-15T10:37:18.935Z] 6564.48 IOPS, 25.64 MiB/s [2024-11-15T10:37:18.935Z] 6330.04 IOPS, 24.73 MiB/s [2024-11-15T10:37:18.935Z] 6111.76 IOPS, 23.87 MiB/s [2024-11-15T10:37:18.935Z] 6053.57 IOPS, 23.65 MiB/s [2024-11-15T10:37:18.935Z] 6098.03 IOPS, 23.82 MiB/s [2024-11-15T10:37:18.935Z] 6171.97 IOPS, 24.11 MiB/s [2024-11-15T10:37:18.935Z] 6231.73 IOPS, 24.34 MiB/s [2024-11-15T10:37:18.935Z] 6270.56 IOPS, 24.49 MiB/s [2024-11-15T10:37:18.935Z] 6310.60 IOPS, 24.65 MiB/s [2024-11-15T10:37:18.935Z] 6369.08 IOPS, 24.88 MiB/s [2024-11-15T10:37:18.935Z] [2024-11-15 10:36:58.385037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.082 [2024-11-15 10:36:58.385118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:58.385190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.082 [2024-11-15 10:36:58.385213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:58.385236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.082 [2024-11-15 10:36:58.385251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:58.385273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.082 [2024-11-15 10:36:58.385288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:58.385309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.082 [2024-11-15 10:36:58.385324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:58.385345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.082 [2024-11-15 10:36:58.385382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:58.385407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.082 [2024-11-15 10:36:58.385422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:18.082 [2024-11-15 10:36:58.385443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.082 [2024-11-15 10:36:58.385457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.385478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.083 [2024-11-15 10:36:58.385493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.385514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.083 [2024-11-15 10:36:58.385528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.385549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.083 [2024-11-15 10:36:58.385564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.385585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.083 [2024-11-15 10:36:58.385599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.385620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.083 [2024-11-15 10:36:58.385635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.385656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.083 [2024-11-15 10:36:58.385671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.385692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.083 [2024-11-15 10:36:58.385707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.385728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.083 [2024-11-15 10:36:58.385743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.385764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.083 [2024-11-15 10:36:58.385779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.385800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.083 [2024-11-15 10:36:58.385818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.385848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.083 [2024-11-15 10:36:58.385865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.385886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.083 [2024-11-15 10:36:58.385901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.385922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.083 [2024-11-15 10:36:58.385937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.385958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.083 [2024-11-15 10:36:58.385973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.385994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.083 [2024-11-15 10:36:58.386009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.386030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.083 [2024-11-15 10:36:58.386044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.386113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.083 [2024-11-15 10:36:58.386135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.386152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.083 [2024-11-15 10:36:58.386166] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.386181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.083 [2024-11-15 10:36:58.386196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.386211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.083 [2024-11-15 10:36:58.386224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.386239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.083 [2024-11-15 10:36:58.386253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.386268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.083 [2024-11-15 10:36:58.386283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.386298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.083 [2024-11-15 10:36:58.386322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.386339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.083 [2024-11-15 10:36:58.386353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.386369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.083 [2024-11-15 10:36:58.386382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.386398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.083 [2024-11-15 10:36:58.386412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.386427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.083 [2024-11-15 10:36:58.386442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.386457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.083 [2024-11-15 10:36:58.386470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.386486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.083 [2024-11-15 10:36:58.386499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.386515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.083 [2024-11-15 10:36:58.386528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.386543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.083 [2024-11-15 10:36:58.386557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.386572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.083 [2024-11-15 10:36:58.386585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.083 [2024-11-15 10:36:58.386601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.084 [2024-11-15 10:36:58.386614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.386629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.084 [2024-11-15 10:36:58.386643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.386658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.084 [2024-11-15 10:36:58.386672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.386693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.084 [2024-11-15 10:36:58.386708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.386723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.084 [2024-11-15 10:36:58.386737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.386752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.084 [2024-11-15 10:36:58.386767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:18.084 [2024-11-15 10:36:58.386782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.084 [2024-11-15 10:36:58.386796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.386811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.084 [2024-11-15 10:36:58.386826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.386841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.084 [2024-11-15 10:36:58.386855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.386870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.084 [2024-11-15 10:36:58.386884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.386899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.084 [2024-11-15 10:36:58.386913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.386929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.084 [2024-11-15 10:36:58.386942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.386958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.084 [2024-11-15 10:36:58.386971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.386987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.084 [2024-11-15 10:36:58.387000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.387016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.084 [2024-11-15 10:36:58.387029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.387044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.084 [2024-11-15 10:36:58.387081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.387099] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.084 [2024-11-15 10:36:58.387113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.387128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.084 [2024-11-15 10:36:58.387142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.387157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.084 [2024-11-15 10:36:58.387171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.387186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.084 [2024-11-15 10:36:58.387200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.387215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.084 [2024-11-15 10:36:58.387229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.387244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.084 [2024-11-15 10:36:58.387259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.387275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.084 [2024-11-15 10:36:58.387289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.387305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.084 [2024-11-15 10:36:58.387319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.387345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.084 [2024-11-15 10:36:58.387361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.387381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.084 [2024-11-15 10:36:58.387395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.387411] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.084 [2024-11-15 10:36:58.387424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.387439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.084 [2024-11-15 10:36:58.387463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.387478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.084 [2024-11-15 10:36:58.387499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.387515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.084 [2024-11-15 10:36:58.387529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.387545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.084 [2024-11-15 10:36:58.387559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.387574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.084 [2024-11-15 10:36:58.387588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.387603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.084 [2024-11-15 10:36:58.387616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.084 [2024-11-15 10:36:58.387631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.084 [2024-11-15 10:36:58.387645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.387660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.085 [2024-11-15 10:36:58.387674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.387689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.085 [2024-11-15 10:36:58.387703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.387718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1752 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.085 [2024-11-15 10:36:58.387731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.387746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.085 [2024-11-15 10:36:58.387760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.387776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.085 [2024-11-15 10:36:58.387790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.387805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.085 [2024-11-15 10:36:58.387818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.387834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.085 [2024-11-15 10:36:58.387847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.387870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.085 [2024-11-15 10:36:58.387885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.387900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.085 [2024-11-15 10:36:58.387914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.387929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.085 [2024-11-15 10:36:58.387943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.387958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.085 [2024-11-15 10:36:58.387973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.387988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.085 [2024-11-15 10:36:58.388002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.388017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.085 [2024-11-15 
10:36:58.388031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.388046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.085 [2024-11-15 10:36:58.388081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.388097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.085 [2024-11-15 10:36:58.388111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.388126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.085 [2024-11-15 10:36:58.388141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.388156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.085 [2024-11-15 10:36:58.388170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.388186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.085 [2024-11-15 10:36:58.388200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.388216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.085 [2024-11-15 10:36:58.388229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.388245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.085 [2024-11-15 10:36:58.388269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.388286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.085 [2024-11-15 10:36:58.388300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.388315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.085 [2024-11-15 10:36:58.388329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.388344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.085 [2024-11-15 10:36:58.388358] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.388373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.085 [2024-11-15 10:36:58.388387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.388403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.085 [2024-11-15 10:36:58.388416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.388431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.085 [2024-11-15 10:36:58.388445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.388461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.085 [2024-11-15 10:36:58.388475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.388490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.085 [2024-11-15 10:36:58.388504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.388519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.085 [2024-11-15 10:36:58.388533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.388548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.085 [2024-11-15 10:36:58.388562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.388577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.085 [2024-11-15 10:36:58.388590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.388605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.085 [2024-11-15 10:36:58.388619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.388634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.085 [2024-11-15 10:36:58.388653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.085 [2024-11-15 10:36:58.388670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.086 [2024-11-15 10:36:58.388684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.086 [2024-11-15 10:36:58.388700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.086 [2024-11-15 10:36:58.388714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.086 [2024-11-15 10:36:58.388729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.086 [2024-11-15 10:36:58.388743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.086 [2024-11-15 10:36:58.388758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.086 [2024-11-15 10:36:58.388772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.086 [2024-11-15 10:36:58.388786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac1290 is same with the state(6) to be set 00:19:18.086 [2024-11-15 10:36:58.388802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.086 [2024-11-15 10:36:58.388813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.086 [2024-11-15 10:36:58.388824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1456 len:8 PRP1 0x0 PRP2 0x0 00:19:18.086 [2024-11-15 10:36:58.388837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.086 [2024-11-15 10:36:58.388851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.086 [2024-11-15 10:36:58.388861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.086 [2024-11-15 10:36:58.388872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1912 len:8 PRP1 0x0 PRP2 0x0 00:19:18.086 [2024-11-15 10:36:58.388885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.086 [2024-11-15 10:36:58.388898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.086 [2024-11-15 10:36:58.388908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.086 [2024-11-15 10:36:58.388918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:8 PRP1 0x0 PRP2 0x0 00:19:18.086 [2024-11-15 10:36:58.388931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.086 [2024-11-15 10:36:58.388944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.086 [2024-11-15 10:36:58.388954] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.086 [2024-11-15 10:36:58.388964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1928 len:8 PRP1 0x0 PRP2 0x0 00:19:18.086 [2024-11-15 10:36:58.388977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.086 [2024-11-15 10:36:58.388990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.086 [2024-11-15 10:36:58.389000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.086 [2024-11-15 10:36:58.389010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1936 len:8 PRP1 0x0 PRP2 0x0 00:19:18.086 [2024-11-15 10:36:58.389029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.086 [2024-11-15 10:36:58.389043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.086 [2024-11-15 10:36:58.389065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.086 [2024-11-15 10:36:58.389077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1944 len:8 PRP1 0x0 PRP2 0x0 00:19:18.086 [2024-11-15 10:36:58.389099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.086 [2024-11-15 10:36:58.389114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.086 [2024-11-15 10:36:58.389125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.086 [2024-11-15 10:36:58.389135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:8 PRP1 0x0 PRP2 0x0 00:19:18.086 [2024-11-15 10:36:58.389149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.086 [2024-11-15 10:36:58.389170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.086 [2024-11-15 10:36:58.389179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.086 [2024-11-15 10:36:58.389189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1960 len:8 PRP1 0x0 PRP2 0x0 00:19:18.086 [2024-11-15 10:36:58.389202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.086 [2024-11-15 10:36:58.389215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.086 [2024-11-15 10:36:58.389225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.086 [2024-11-15 10:36:58.389235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1968 len:8 PRP1 0x0 PRP2 0x0 00:19:18.086 [2024-11-15 10:36:58.389248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.086 [2024-11-15 10:36:58.389261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.086 [2024-11-15 10:36:58.389277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:19:18.086 [2024-11-15 10:36:58.389288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1976 len:8 PRP1 0x0 PRP2 0x0 00:19:18.086 [2024-11-15 10:36:58.389302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.086 [2024-11-15 10:36:58.389315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.086 [2024-11-15 10:36:58.389325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.086 [2024-11-15 10:36:58.389335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:8 PRP1 0x0 PRP2 0x0 00:19:18.086 [2024-11-15 10:36:58.389348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.086 [2024-11-15 10:36:58.389361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.086 [2024-11-15 10:36:58.389371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.086 [2024-11-15 10:36:58.389381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1992 len:8 PRP1 0x0 PRP2 0x0 00:19:18.086 [2024-11-15 10:36:58.389394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.086 [2024-11-15 10:36:58.389407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.086 [2024-11-15 10:36:58.389423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.086 [2024-11-15 10:36:58.389435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2000 len:8 PRP1 0x0 PRP2 0x0 00:19:18.086 [2024-11-15 10:36:58.389448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.086 [2024-11-15 10:36:58.389461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.086 [2024-11-15 10:36:58.389471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.086 [2024-11-15 10:36:58.389481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2008 len:8 PRP1 0x0 PRP2 0x0 00:19:18.086 [2024-11-15 10:36:58.389499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.086 [2024-11-15 10:36:58.389513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.086 [2024-11-15 10:36:58.389523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.086 [2024-11-15 10:36:58.389533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:8 PRP1 0x0 PRP2 0x0 00:19:18.086 [2024-11-15 10:36:58.389546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.087 [2024-11-15 10:36:58.389559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.087 [2024-11-15 10:36:58.389569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.087 [2024-11-15 10:36:58.389579] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2024 len:8 PRP1 0x0 PRP2 0x0 00:19:18.087 [2024-11-15 10:36:58.389592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.087 [2024-11-15 10:36:58.389605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.087 [2024-11-15 10:36:58.389615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.087 [2024-11-15 10:36:58.389625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2032 len:8 PRP1 0x0 PRP2 0x0 00:19:18.087 [2024-11-15 10:36:58.389638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.087 [2024-11-15 10:36:58.390870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:18.087 [2024-11-15 10:36:58.390951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.087 [2024-11-15 10:36:58.390975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.087 [2024-11-15 10:36:58.391006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a321d0 (9): Bad file descriptor 00:19:18.087 [2024-11-15 10:36:58.391448] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.087 [2024-11-15 10:36:58.391481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a321d0 with addr=10.0.0.3, port=4421 00:19:18.087 [2024-11-15 10:36:58.391498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a321d0 is same with the state(6) to be set 00:19:18.087 [2024-11-15 10:36:58.391563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a321d0 (9): Bad file descriptor 00:19:18.087 [2024-11-15 10:36:58.391600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:18.087 [2024-11-15 10:36:58.391617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:18.087 [2024-11-15 10:36:58.391632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:18.087 [2024-11-15 10:36:58.391658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
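The abort burst above ends with the initiator failing to reconnect to 10.0.0.3 port 4421 (connect() errno = 111), the controller being marked failed, and another reset being scheduled; per the next block, the reset only succeeds about ten seconds later. If the same situation had to be inspected by hand, the path state could be queried over the bdevperf RPC socket. This is only an illustrative sketch, assuming the /var/tmp/bdevperf.sock socket these host tests conventionally use and the standard bdev_nvme RPCs; these commands are not taken from multipath.sh:

  # List the NVMe-oF controllers bdevperf has attached and their current states
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
  # Show which I/O paths are currently usable for each bdev (the multipath view)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths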
00:19:18.087 [2024-11-15 10:36:58.391675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:18.087 6432.43 IOPS, 25.13 MiB/s [2024-11-15T10:37:18.940Z] 6486.32 IOPS, 25.34 MiB/s [2024-11-15T10:37:18.940Z] 6544.82 IOPS, 25.57 MiB/s [2024-11-15T10:37:18.940Z] 6601.80 IOPS, 25.79 MiB/s [2024-11-15T10:37:18.940Z] 6656.39 IOPS, 26.00 MiB/s [2024-11-15T10:37:18.940Z] 6711.24 IOPS, 26.22 MiB/s [2024-11-15T10:37:18.940Z] 6753.67 IOPS, 26.38 MiB/s [2024-11-15T10:37:18.940Z] 6788.73 IOPS, 26.52 MiB/s [2024-11-15T10:37:18.940Z] 6827.73 IOPS, 26.67 MiB/s [2024-11-15T10:37:18.940Z] 6864.87 IOPS, 26.82 MiB/s [2024-11-15T10:37:18.940Z] [2024-11-15 10:37:08.451276] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:19:18.087 6903.64 IOPS, 26.97 MiB/s [2024-11-15T10:37:18.940Z] 6941.65 IOPS, 27.12 MiB/s [2024-11-15T10:37:18.940Z] 6979.73 IOPS, 27.26 MiB/s [2024-11-15T10:37:18.940Z] 7016.58 IOPS, 27.41 MiB/s [2024-11-15T10:37:18.940Z] 7046.25 IOPS, 27.52 MiB/s [2024-11-15T10:37:18.940Z] 7077.67 IOPS, 27.65 MiB/s [2024-11-15T10:37:18.940Z] 7108.36 IOPS, 27.77 MiB/s [2024-11-15T10:37:18.940Z] 7136.83 IOPS, 27.88 MiB/s [2024-11-15T10:37:18.940Z] 7166.53 IOPS, 27.99 MiB/s [2024-11-15T10:37:18.940Z] 7195.09 IOPS, 28.11 MiB/s [2024-11-15T10:37:18.940Z] Received shutdown signal, test time was about 56.309479 seconds 00:19:18.087 00:19:18.087 Latency(us) 00:19:18.087 [2024-11-15T10:37:18.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.087 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:18.087 Verification LBA range: start 0x0 length 0x4000 00:19:18.087 Nvme0n1 : 56.31 7202.14 28.13 0.00 0.00 17739.23 173.15 7046430.72 00:19:18.087 [2024-11-15T10:37:18.940Z] =================================================================================================================== 00:19:18.087 [2024-11-15T10:37:18.940Z] Total : 7202.14 28.13 0.00 0.00 17739.23 173.15 7046430.72 00:19:18.087 10:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:18.346 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:19:18.346 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:18.346 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:19:18.346 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:18.346 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:19:18.346 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:18.346 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:19:18.346 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:18.346 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:18.346 rmmod nvme_tcp 00:19:18.346 rmmod nvme_fabrics 00:19:18.659 rmmod nvme_keyring 00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 
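To make the tail of the run easier to follow: bdevperf reports a single verify job (Nvme0n1, core mask 0x4, queue depth 128, 4096-byte I/O) averaging 7202.14 IOPS / 28.13 MiB/s over the 56.31-second window despite the controller resets, after which the script tears everything down. Condensed from the trace above (paths and arguments exactly as the test uses them), the teardown amounts to:

  # multipath.sh@120-125: drop the subsystem, clear the trap, remove the scratch file
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  # nvmftestfini: unload the kernel initiator modules (nvme_tcp, nvme_fabrics, nvme_keyring)
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # then kill the nvmf target process (pid 80930 here) and remove the veth/bridge/netns plumbing, as traced below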
00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80930 ']' 00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80930 00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 80930 ']' 00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 80930 00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname 00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80930 00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:18.659 killing process with pid 80930 00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80930' 00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 80930 00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 80930 00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:18.659 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:18.918 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:18.918 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:18.918 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:18.918 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:18.918 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:18.918 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:18.918 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 
-- # ip link delete nvmf_br type bridge 00:19:18.918 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:18.918 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:18.918 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:18.918 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:18.918 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:18.918 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.918 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:18.918 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.918 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:19:18.918 00:19:18.918 real 1m2.203s 00:19:18.918 user 2m52.999s 00:19:18.918 sys 0m18.795s 00:19:18.918 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:18.918 ************************************ 00:19:18.918 END TEST nvmf_host_multipath 00:19:18.918 ************************************ 00:19:18.918 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:18.918 10:37:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:18.918 10:37:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:18.918 10:37:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:18.918 10:37:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.177 ************************************ 00:19:19.177 START TEST nvmf_timeout 00:19:19.177 ************************************ 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:19.177 * Looking for test storage... 
00:19:19.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:19.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.177 --rc genhtml_branch_coverage=1 00:19:19.177 --rc genhtml_function_coverage=1 00:19:19.177 --rc genhtml_legend=1 00:19:19.177 --rc geninfo_all_blocks=1 00:19:19.177 --rc geninfo_unexecuted_blocks=1 00:19:19.177 00:19:19.177 ' 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:19.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.177 --rc genhtml_branch_coverage=1 00:19:19.177 --rc genhtml_function_coverage=1 00:19:19.177 --rc genhtml_legend=1 00:19:19.177 --rc geninfo_all_blocks=1 00:19:19.177 --rc geninfo_unexecuted_blocks=1 00:19:19.177 00:19:19.177 ' 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:19.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.177 --rc genhtml_branch_coverage=1 00:19:19.177 --rc genhtml_function_coverage=1 00:19:19.177 --rc genhtml_legend=1 00:19:19.177 --rc geninfo_all_blocks=1 00:19:19.177 --rc geninfo_unexecuted_blocks=1 00:19:19.177 00:19:19.177 ' 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:19.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.177 --rc genhtml_branch_coverage=1 00:19:19.177 --rc genhtml_function_coverage=1 00:19:19.177 --rc genhtml_legend=1 00:19:19.177 --rc geninfo_all_blocks=1 00:19:19.177 --rc geninfo_unexecuted_blocks=1 00:19:19.177 00:19:19.177 ' 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:19.177 
10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:19.177 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:19.178 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:19.178 10:37:19 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:19.178 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.178 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:19.178 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:19.178 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:19.178 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:19.178 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:19.178 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:19.178 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:19.178 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:19.178 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:19.178 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:19.178 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:19.178 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:19.178 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:19.178 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:19.178 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:19.178 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:19.178 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:19.178 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:19.178 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:19.178 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:19.178 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:19.178 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:19.178 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:19.178 Cannot find device "nvmf_init_br" 00:19:19.178 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:19:19.178 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:19.437 Cannot find device "nvmf_init_br2" 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:19:19.437 Cannot find device "nvmf_tgt_br" 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:19.437 Cannot find device "nvmf_tgt_br2" 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:19.437 Cannot find device "nvmf_init_br" 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:19.437 Cannot find device "nvmf_init_br2" 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:19.437 Cannot find device "nvmf_tgt_br" 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:19.437 Cannot find device "nvmf_tgt_br2" 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:19.437 Cannot find device "nvmf_br" 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:19.437 Cannot find device "nvmf_init_if" 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:19.437 Cannot find device "nvmf_init_if2" 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:19.437 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:19.437 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:19.437 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
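[annotation] The nvmf_veth_init calls traced above build the virtual network this whole run depends on: two initiator-side veth interfaces left in the root namespace, two target-side veth interfaces moved into the nvmf_tgt_ns_spdk namespace, and all four peer ends joined by the nvmf_br bridge. A condensed, standalone restatement of those traced commands (interface names and 10.0.0.x/24 addresses copied from the trace; this is a sketch of the steps above, not an SPDK helper) looks like:

    # veth pairs: the *_if end is used for traffic, the *_br end joins the bridge
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # target-facing ends live inside the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addressing: initiators .1/.2 in the root namespace, targets .3/.4 inside it
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # bring everything up
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the four peer ends together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # open NVMe/TCP port 4420 on the initiator interfaces, allow bridge-local forwarding
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four ping checks that follow in the trace (10.0.0.3/.4 from the root namespace, 10.0.0.1/.2 from inside the namespace) are the smoke test that this wiring is symmetric before any NVMe traffic is attempted.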
00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:19.696 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:19.696 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:19:19.696 00:19:19.696 --- 10.0.0.3 ping statistics --- 00:19:19.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.696 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:19.696 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:19.696 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:19:19.696 00:19:19.696 --- 10.0.0.4 ping statistics --- 00:19:19.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.696 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:19.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:19.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:19:19.696 00:19:19.696 --- 10.0.0.1 ping statistics --- 00:19:19.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.696 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:19.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:19.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:19:19.696 00:19:19.696 --- 10.0.0.2 ping statistics --- 00:19:19.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.696 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=82148 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 82148 00:19:19.696 10:37:20 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 82148 ']' 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.696 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:19.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.697 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.697 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:19.697 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:19.955 [2024-11-15 10:37:20.560082] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:19:19.955 [2024-11-15 10:37:20.560822] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.955 [2024-11-15 10:37:20.708591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:19.955 [2024-11-15 10:37:20.777317] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:19.955 [2024-11-15 10:37:20.777389] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:19.955 [2024-11-15 10:37:20.777403] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:19.955 [2024-11-15 10:37:20.777414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:19.955 [2024-11-15 10:37:20.777432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
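[annotation] nvmfappstart above launches the target as "ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3" and then waits (waitforlisten) for its RPC socket before any configuration is issued. A hedged, simplified equivalent of that launch-and-wait step, polling rpc_get_methods as a stand-in for the full waitforlisten helper, would be:

    # start the target inside the namespace (binary path and flags copied from the trace)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    # simplified stand-in for waitforlisten: poll until the RPC socket answers
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

    # per the startup notice above, a tracepoint snapshot can be taken at runtime with
    #   spdk_trace -s nvmf -i 0
    # or by copying /dev/shm/nvmf_trace.0 for offline analysis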
00:19:19.955 [2024-11-15 10:37:20.778712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.955 [2024-11-15 10:37:20.778726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.213 [2024-11-15 10:37:20.837669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:20.213 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:20.213 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:19:20.213 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:20.213 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:20.213 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:20.213 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.213 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:20.213 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:20.471 [2024-11-15 10:37:21.187006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.471 10:37:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:20.729 Malloc0 00:19:20.729 10:37:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:20.987 10:37:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:21.554 10:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:21.554 [2024-11-15 10:37:22.402406] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:21.812 10:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82190 00:19:21.812 10:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:21.812 10:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82190 /var/tmp/bdevperf.sock 00:19:21.812 10:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 82190 ']' 00:19:21.812 10:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:21.812 10:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:21.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:21.812 10:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:21.812 10:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:21.812 10:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:21.812 [2024-11-15 10:37:22.473664] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:19:21.812 [2024-11-15 10:37:22.473760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82190 ] 00:19:21.812 [2024-11-15 10:37:22.657849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.070 [2024-11-15 10:37:22.729555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.070 [2024-11-15 10:37:22.790842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:23.004 10:37:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:23.004 10:37:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:19:23.004 10:37:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:23.004 10:37:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:23.571 NVMe0n1 00:19:23.571 10:37:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82219 00:19:23.571 10:37:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:23.571 10:37:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:23.571 Running I/O for 10 seconds... 
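[annotation] At this point the timeout test case is fully wired: a 64 MB Malloc bdev with 512-byte blocks exported through nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.3:4420, and a bdevperf initiator (bdevperf_pid 82190, RPC socket /var/tmp/bdevperf.sock) attached with a 5 s controller-loss timeout and a 2 s reconnect delay. Condensing the RPC sequence traced above into one place (same rpc.py path and sockets as in this log; a restatement, not new configuration):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # target side, over the default /var/tmp/spdk.sock
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # initiator side, against the bdevperf RPC socket
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

The next step in the trace removes the 10.0.0.3:4420 listener while bdevperf is mid-run; the long run of "ABORTED - SQ DELETION" completions that follows is each in-flight command on the torn-down queue pairs being failed back to the initiator, whose subsequent reconnect behaviour is bounded by the --ctrlr-loss-timeout-sec 5 / --reconnect-delay-sec 2 settings shown above.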
00:19:24.505 10:37:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:24.767 6805.00 IOPS, 26.58 MiB/s [2024-11-15T10:37:25.620Z] [2024-11-15 10:37:25.449770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.767 [2024-11-15 10:37:25.449841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.767 [2024-11-15 10:37:25.449867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.767 [2024-11-15 10:37:25.449879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.767 [2024-11-15 10:37:25.449892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.767 [2024-11-15 10:37:25.449902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.767 [2024-11-15 10:37:25.449915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.767 [2024-11-15 10:37:25.449924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.767 [2024-11-15 10:37:25.449936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.767 [2024-11-15 10:37:25.449946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.767 [2024-11-15 10:37:25.449958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.767 [2024-11-15 10:37:25.449967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.767 [2024-11-15 10:37:25.449979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.767 [2024-11-15 10:37:25.449988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.767 [2024-11-15 10:37:25.450000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.767 [2024-11-15 10:37:25.450009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.767 [2024-11-15 10:37:25.450021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.767 [2024-11-15 10:37:25.450032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.767 [2024-11-15 10:37:25.450043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63456 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.767 [2024-11-15 10:37:25.450068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.767 [2024-11-15 10:37:25.450081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.767 [2024-11-15 10:37:25.450091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.767 [2024-11-15 10:37:25.450103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.767 [2024-11-15 10:37:25.450112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.767 [2024-11-15 10:37:25.450124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.767 [2024-11-15 10:37:25.450134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.767 [2024-11-15 10:37:25.450146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.767 [2024-11-15 10:37:25.450156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.767 [2024-11-15 10:37:25.450167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.767 [2024-11-15 10:37:25.450177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.767 [2024-11-15 10:37:25.450189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.767 [2024-11-15 10:37:25.450199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.767 [2024-11-15 10:37:25.450211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.767 [2024-11-15 10:37:25.450221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.767 [2024-11-15 10:37:25.450237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.767 [2024-11-15 10:37:25.450247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.767 [2024-11-15 10:37:25.450259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.767 [2024-11-15 10:37:25.450269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.767 [2024-11-15 10:37:25.450281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.767 
[2024-11-15 10:37:25.450291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.767 [2024-11-15 10:37:25.450303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.767 [2024-11-15 10:37:25.450313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.767 [2024-11-15 10:37:25.450324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.767 [2024-11-15 10:37:25.450335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.767 [2024-11-15 10:37:25.450347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.767 [2024-11-15 10:37:25.450369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.767 [2024-11-15 10:37:25.450381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.767 [2024-11-15 10:37:25.450390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.768 [2024-11-15 10:37:25.450412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.768 [2024-11-15 10:37:25.450435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.768 [2024-11-15 10:37:25.450456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.768 [2024-11-15 10:37:25.450479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.768 [2024-11-15 10:37:25.450500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.768 [2024-11-15 10:37:25.450521] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.768 [2024-11-15 10:37:25.450542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.768 [2024-11-15 10:37:25.450562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.768 [2024-11-15 10:37:25.450584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:62648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.768 [2024-11-15 10:37:25.450607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:62656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.768 [2024-11-15 10:37:25.450628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.768 [2024-11-15 10:37:25.450649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:62672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.768 [2024-11-15 10:37:25.450671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:62680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.768 [2024-11-15 10:37:25.450693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:62688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.768 [2024-11-15 10:37:25.450714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:62696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.768 [2024-11-15 10:37:25.450735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.768 [2024-11-15 10:37:25.450757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.768 [2024-11-15 10:37:25.450777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.768 [2024-11-15 10:37:25.450799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:62728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.768 [2024-11-15 10:37:25.450819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.768 [2024-11-15 10:37:25.450840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.768 [2024-11-15 10:37:25.450862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.768 [2024-11-15 10:37:25.450883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.768 [2024-11-15 10:37:25.450903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.768 [2024-11-15 10:37:25.450926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.768 [2024-11-15 10:37:25.450949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.768 [2024-11-15 10:37:25.450971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.768 [2024-11-15 10:37:25.450982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.768 [2024-11-15 10:37:25.450992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.769 [2024-11-15 10:37:25.451003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-15 10:37:25.451013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.769 [2024-11-15 10:37:25.451024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-15 10:37:25.451034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.769 [2024-11-15 10:37:25.451045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-15 10:37:25.451078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.769 [2024-11-15 10:37:25.451091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.769 [2024-11-15 10:37:25.451101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.769 [2024-11-15 10:37:25.451113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-15 10:37:25.451123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.769 [2024-11-15 10:37:25.451135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-15 10:37:25.451144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.769 [2024-11-15 10:37:25.451155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-15 10:37:25.451165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.769 [2024-11-15 10:37:25.451177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-15 10:37:25.451187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:24.769 [2024-11-15 10:37:25.451199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-15 10:37:25.451208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.769 [2024-11-15 10:37:25.451220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-15 10:37:25.451229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.769 [2024-11-15 10:37:25.451240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-15 10:37:25.451250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.769 [2024-11-15 10:37:25.451261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-15 10:37:25.451271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.769 [2024-11-15 10:37:25.451282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-15 10:37:25.451292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.769 [2024-11-15 10:37:25.451304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-15 10:37:25.451313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.769 [2024-11-15 10:37:25.451337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-15 10:37:25.451347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.769 [2024-11-15 10:37:25.451359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-15 10:37:25.451368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.769 [2024-11-15 10:37:25.451380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-15 10:37:25.451390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.769 [2024-11-15 10:37:25.451401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-15 10:37:25.451411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.769 [2024-11-15 10:37:25.451422] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-15 10:37:25.451437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~49 near-identical entry pairs trimmed: the queued READ commands for lba 62928 through 63312 (len:8) are each printed by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION (00/08) while the submission queue is deleted for the controller reset ...]
00:19:24.771 [2024-11-15 10:37:25.452529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.771 [2024-11-15
10:37:25.452539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.771 [2024-11-15 10:37:25.452550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.771 [2024-11-15 10:37:25.452560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.771 [2024-11-15 10:37:25.452571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.771 [2024-11-15 10:37:25.452580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.771 [2024-11-15 10:37:25.452592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.771 [2024-11-15 10:37:25.452602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.771 [2024-11-15 10:37:25.452613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.771 [2024-11-15 10:37:25.452622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.771 [2024-11-15 10:37:25.452639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.771 [2024-11-15 10:37:25.452648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.771 [2024-11-15 10:37:25.452660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.771 [2024-11-15 10:37:25.452669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.771 [2024-11-15 10:37:25.452680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf41f70 is same with the state(6) to be set 00:19:24.771 [2024-11-15 10:37:25.452693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:24.771 [2024-11-15 10:37:25.452701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:24.771 [2024-11-15 10:37:25.452709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63376 len:8 PRP1 0x0 PRP2 0x0 00:19:24.771 [2024-11-15 10:37:25.452718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.771 [2024-11-15 10:37:25.453063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:24.771 [2024-11-15 10:37:25.453157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4e50 (9): Bad file descriptor 00:19:24.771 [2024-11-15 10:37:25.453272] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:24.771 [2024-11-15 10:37:25.453294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0xed4e50 with addr=10.0.0.3, port=4420 00:19:24.771 [2024-11-15 10:37:25.453305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4e50 is same with the state(6) to be set 00:19:24.771 [2024-11-15 10:37:25.453323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4e50 (9): Bad file descriptor 00:19:24.771 [2024-11-15 10:37:25.453340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:24.771 [2024-11-15 10:37:25.453349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:24.771 [2024-11-15 10:37:25.453361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:24.771 [2024-11-15 10:37:25.453372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:24.771 [2024-11-15 10:37:25.453389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:24.771 10:37:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:26.642 3914.50 IOPS, 15.29 MiB/s [2024-11-15T10:37:27.495Z] 2609.67 IOPS, 10.19 MiB/s [2024-11-15T10:37:27.495Z] [2024-11-15 10:37:27.453646] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:26.642 [2024-11-15 10:37:27.453738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4e50 with addr=10.0.0.3, port=4420 00:19:26.642 [2024-11-15 10:37:27.453756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4e50 is same with the state(6) to be set 00:19:26.642 [2024-11-15 10:37:27.453784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4e50 (9): Bad file descriptor 00:19:26.642 [2024-11-15 10:37:27.453817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:26.642 [2024-11-15 10:37:27.453830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:26.642 [2024-11-15 10:37:27.453842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:26.642 [2024-11-15 10:37:27.453855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
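Each reconnect attempt above dies inside uring_sock_create with connect() returning errno = 111 before any NVMe/TCP traffic can flow; on Linux that value is ECONNREFUSED, meaning nothing is currently accepting connections on 10.0.0.3:4420. A one-line shell check of the mapping (not part of the test itself):

  # errno 111 on Linux is ECONNREFUSED ("Connection refused")
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'

With the connection refused, every cycle ends in bdev_nvme_reset_ctrlr_complete logging "Resetting controller failed" and the host scheduling another reset.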
00:19:26.642 [2024-11-15 10:37:27.453867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:26.642 10:37:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:26.642 10:37:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:26.642 10:37:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:27.208 10:37:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:27.209 10:37:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:27.209 10:37:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:27.209 10:37:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:27.472 10:37:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:27.472 10:37:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:28.857 1957.25 IOPS, 7.65 MiB/s [2024-11-15T10:37:29.710Z] 1565.80 IOPS, 6.12 MiB/s [2024-11-15T10:37:29.710Z] [2024-11-15 10:37:29.454130] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:28.857 [2024-11-15 10:37:29.454226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4e50 with addr=10.0.0.3, port=4420 00:19:28.857 [2024-11-15 10:37:29.454244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4e50 is same with the state(6) to be set 00:19:28.857 [2024-11-15 10:37:29.454270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4e50 (9): Bad file descriptor 00:19:28.857 [2024-11-15 10:37:29.454290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:28.857 [2024-11-15 10:37:29.454300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:28.857 [2024-11-15 10:37:29.454311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:28.857 [2024-11-15 10:37:29.454323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:28.857 [2024-11-15 10:37:29.454335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:30.765 1304.83 IOPS, 5.10 MiB/s [2024-11-15T10:37:31.618Z] 1118.43 IOPS, 4.37 MiB/s [2024-11-15T10:37:31.618Z] [2024-11-15 10:37:31.454511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:30.765 [2024-11-15 10:37:31.454570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:30.765 [2024-11-15 10:37:31.454601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:30.765 [2024-11-15 10:37:31.454612] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:19:30.765 [2024-11-15 10:37:31.454624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
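The get_controller / get_bdev checks traced above reduce to two RPC queries against the bdevperf application, with jq pulling out the names. A minimal stand-alone sketch of the same checks (helper bodies reconstructed from the logged commands, not copied from host/timeout.sh):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  get_controller() { "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'; }
  get_bdev() { "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'; }
  [[ $(get_controller) == NVMe0 ]]    # controller is still registered
  [[ $(get_bdev) == NVMe0n1 ]]        # and its namespace bdev still exists

At this point the reconnect attempts are still inside the loss timeout, so both names are present; further down the trace the same comparisons run against empty strings once the controller has been given up and deleted.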
00:19:31.700 978.62 IOPS, 3.82 MiB/s 00:19:31.700 Latency(us) 00:19:31.700 [2024-11-15T10:37:32.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.700 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:31.700 Verification LBA range: start 0x0 length 0x4000 00:19:31.700 NVMe0n1 : 8.15 960.24 3.75 15.70 0.00 130954.69 4140.68 7015926.69 00:19:31.700 [2024-11-15T10:37:32.553Z] =================================================================================================================== 00:19:31.700 [2024-11-15T10:37:32.553Z] Total : 960.24 3.75 15.70 0.00 130954.69 4140.68 7015926.69 00:19:31.700 { 00:19:31.700 "results": [ 00:19:31.700 { 00:19:31.700 "job": "NVMe0n1", 00:19:31.700 "core_mask": "0x4", 00:19:31.700 "workload": "verify", 00:19:31.700 "status": "finished", 00:19:31.700 "verify_range": { 00:19:31.700 "start": 0, 00:19:31.700 "length": 16384 00:19:31.700 }, 00:19:31.700 "queue_depth": 128, 00:19:31.700 "io_size": 4096, 00:19:31.700 "runtime": 8.153176, 00:19:31.700 "iops": 960.2392981581656, 00:19:31.700 "mibps": 3.7509347584303345, 00:19:31.700 "io_failed": 128, 00:19:31.700 "io_timeout": 0, 00:19:31.700 "avg_latency_us": 130954.68930958446, 00:19:31.700 "min_latency_us": 4140.683636363637, 00:19:31.700 "max_latency_us": 7015926.69090909 00:19:31.700 } 00:19:31.700 ], 00:19:31.700 "core_count": 1 00:19:31.700 } 00:19:32.280 10:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:19:32.280 10:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:32.280 10:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:32.847 10:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:32.847 10:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:19:32.847 10:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:32.847 10:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:33.107 10:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:19:33.107 10:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82219 00:19:33.107 10:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82190 00:19:33.107 10:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 82190 ']' 00:19:33.107 10:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 82190 00:19:33.107 10:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:19:33.107 10:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:33.107 10:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82190 00:19:33.107 killing process with pid 82190 00:19:33.107 Received shutdown signal, test time was about 9.466388 seconds 00:19:33.107 00:19:33.107 Latency(us) 00:19:33.107 [2024-11-15T10:37:33.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.107 [2024-11-15T10:37:33.960Z] =================================================================================================================== 00:19:33.107 [2024-11-15T10:37:33.960Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:19:33.107 10:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:33.107 10:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:33.107 10:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82190' 00:19:33.107 10:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 82190 00:19:33.107 10:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 82190 00:19:33.365 10:37:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:33.624 [2024-11-15 10:37:34.234353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:33.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:33.624 10:37:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82342 00:19:33.624 10:37:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:33.624 10:37:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82342 /var/tmp/bdevperf.sock 00:19:33.624 10:37:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 82342 ']' 00:19:33.624 10:37:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:33.624 10:37:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:33.624 10:37:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:33.624 10:37:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:33.624 10:37:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:33.624 [2024-11-15 10:37:34.303718] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
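Before moving on to the next test case, the bdevperf JSON summary printed a few lines up is worth a quick cross-check: with 4096-byte I/O, MiB/s is just IOPS scaled by the I/O size, and the Fail/s column is io_failed over the runtime. Using the values from the log (field interpretation assumed from the bdevperf output, not from its source):

  awk 'BEGIN {
    iops = 960.2392981581656; io_size = 4096; runtime = 8.153176; io_failed = 128
    printf "MiB/s  = %.6f\n", iops * io_size / 1048576   # 3.750935, the mibps field above
    printf "Fail/s = %.2f\n", io_failed / runtime        # 15.70, the Fail/s column
    printf "I/Os  ~= %.0f\n", iops * runtime             # rough count of I/Os behind the average
  }'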
00:19:33.624 [2024-11-15 10:37:34.304023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82342 ] 00:19:33.624 [2024-11-15 10:37:34.454426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.883 [2024-11-15 10:37:34.526270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.883 [2024-11-15 10:37:34.586046] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:33.883 10:37:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:33.883 10:37:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:19:33.883 10:37:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:34.142 10:37:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:34.709 NVMe0n1 00:19:34.709 10:37:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82358 00:19:34.709 10:37:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:34.709 10:37:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:34.709 Running I/O for 10 seconds... 
00:19:35.647 10:37:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:35.908 6642.00 IOPS, 25.95 MiB/s [2024-11-15T10:37:36.762Z] [2024-11-15 10:37:36.555247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.909 [2024-11-15 10:37:36.555323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.909 [2024-11-15 10:37:36.555351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.909 [2024-11-15 10:37:36.555363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.909 [2024-11-15 10:37:36.555375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.909 [2024-11-15 10:37:36.555385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.909 [2024-11-15 10:37:36.555396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.909 [2024-11-15 10:37:36.555406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.909 [2024-11-15 10:37:36.555417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.909 [2024-11-15 10:37:36.555428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.909 [2024-11-15 10:37:36.555439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.909 [2024-11-15 10:37:36.555449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.909 [2024-11-15 10:37:36.555461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.909 [2024-11-15 10:37:36.555470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.909 [2024-11-15 10:37:36.555482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.909 [2024-11-15 10:37:36.555491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.909 [2024-11-15 10:37:36.555503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.909 [2024-11-15 10:37:36.555512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.909 [2024-11-15 10:37:36.555523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62520 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.909 [2024-11-15 10:37:36.555533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repetitive entry pairs trimmed: dozens of further queued READ (lba 62528 onward) and WRITE (lba 63152 onward) commands, len:8 each, are printed and completed as ABORTED - SQ DELETION (00/08) after the listener is removed and the submission queue is deleted ...]
[2024-11-15 10:37:36.557458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.912 [2024-11-15 10:37:36.557467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.912 [2024-11-15 10:37:36.557478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.912 [2024-11-15 10:37:36.557487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.912 [2024-11-15 10:37:36.557498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.912 [2024-11-15 10:37:36.557508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.912 [2024-11-15 10:37:36.557518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.912 [2024-11-15 10:37:36.557528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.912 [2024-11-15 10:37:36.557539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.912 [2024-11-15 10:37:36.557548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.912 [2024-11-15 10:37:36.557559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.912 [2024-11-15 10:37:36.557568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.912 [2024-11-15 10:37:36.557579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.912 [2024-11-15 10:37:36.557589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.912 [2024-11-15 10:37:36.557599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.912 [2024-11-15 10:37:36.557609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.912 [2024-11-15 10:37:36.557620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.912 [2024-11-15 10:37:36.557629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.912 [2024-11-15 10:37:36.557640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.912 [2024-11-15 10:37:36.557654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.912 [2024-11-15 10:37:36.557665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.912 [2024-11-15 10:37:36.557675] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.912 [2024-11-15 10:37:36.557686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.912 [2024-11-15 10:37:36.557696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.912 [2024-11-15 10:37:36.557707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.912 [2024-11-15 10:37:36.557716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.912 [2024-11-15 10:37:36.557728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.912 [2024-11-15 10:37:36.557737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.912 [2024-11-15 10:37:36.557748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.912 [2024-11-15 10:37:36.557757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.912 [2024-11-15 10:37:36.557768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.912 [2024-11-15 10:37:36.557777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.913 [2024-11-15 10:37:36.557788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.913 [2024-11-15 10:37:36.557797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.913 [2024-11-15 10:37:36.557808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.913 [2024-11-15 10:37:36.557817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.913 [2024-11-15 10:37:36.557829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.913 [2024-11-15 10:37:36.557838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.913 [2024-11-15 10:37:36.557848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.913 [2024-11-15 10:37:36.557858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.913 [2024-11-15 10:37:36.557869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:63056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.913 [2024-11-15 10:37:36.557878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.913 [2024-11-15 10:37:36.557889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.913 [2024-11-15 10:37:36.557898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.913 [2024-11-15 10:37:36.557909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.913 [2024-11-15 10:37:36.557919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.913 [2024-11-15 10:37:36.557930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.913 [2024-11-15 10:37:36.557939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.913 [2024-11-15 10:37:36.557950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.913 [2024-11-15 10:37:36.557959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.913 [2024-11-15 10:37:36.557971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.913 [2024-11-15 10:37:36.557985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.913 [2024-11-15 10:37:36.558004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.913 [2024-11-15 10:37:36.558013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.913 [2024-11-15 10:37:36.558023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0f70 is same with the state(6) to be set 00:19:35.913 [2024-11-15 10:37:36.558036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.913 [2024-11-15 10:37:36.558044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.913 [2024-11-15 10:37:36.558062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63112 len:8 PRP1 0x0 PRP2 0x0 00:19:35.913 [2024-11-15 10:37:36.558072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.913 [2024-11-15 10:37:36.558390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:35.913 [2024-11-15 10:37:36.558482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x863e50 (9): Bad file descriptor 00:19:35.913 [2024-11-15 10:37:36.558592] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:35.913 [2024-11-15 10:37:36.558613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863e50 with addr=10.0.0.3, 
port=4420 00:19:35.913 [2024-11-15 10:37:36.558623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863e50 is same with the state(6) to be set 00:19:35.913 [2024-11-15 10:37:36.558641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x863e50 (9): Bad file descriptor 00:19:35.913 [2024-11-15 10:37:36.558657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:35.913 [2024-11-15 10:37:36.558666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:35.913 [2024-11-15 10:37:36.558676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:35.913 [2024-11-15 10:37:36.558686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:19:35.913 [2024-11-15 10:37:36.558697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:35.913 10:37:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:19:36.914 3905.00 IOPS, 15.25 MiB/s [2024-11-15T10:37:37.767Z] [2024-11-15 10:37:37.558862] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:36.915 [2024-11-15 10:37:37.558945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863e50 with addr=10.0.0.3, port=4420 00:19:36.915 [2024-11-15 10:37:37.558963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863e50 is same with the state(6) to be set 00:19:36.915 [2024-11-15 10:37:37.559005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x863e50 (9): Bad file descriptor 00:19:36.915 [2024-11-15 10:37:37.559044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:36.915 [2024-11-15 10:37:37.559076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:36.915 [2024-11-15 10:37:37.559088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:36.915 [2024-11-15 10:37:37.559100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:19:36.915 [2024-11-15 10:37:37.559112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:36.915 10:37:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:37.172 [2024-11-15 10:37:37.810480] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:37.172 10:37:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82358 00:19:37.739 2603.33 IOPS, 10.17 MiB/s [2024-11-15T10:37:38.592Z] [2024-11-15 10:37:38.572801] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
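The reconnect failures above (uring_sock_create: connect() failed, errno = 111) stop once the TCP listener is re-added through rpc.py, after which the log reports "Resetting controller successful". A minimal sketch of the listener toggle this timeout test appears to exercise, reusing the rpc.py subcommands and flags that are visible in the log itself (the sleep duration is an assumption, not taken from the log):

  # Hedged sketch: drop and restore the NVMe/TCP listener for the subsystem under test.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  # While the listener is gone, host-side reconnect attempts fail with errno 111 (connection refused).
  "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
  sleep 1
  # Restoring the listener lets the pending controller reset complete.
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420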
00:19:39.606 1952.50 IOPS, 7.63 MiB/s [2024-11-15T10:37:41.836Z] 2992.60 IOPS, 11.69 MiB/s [2024-11-15T10:37:42.771Z] 3887.17 IOPS, 15.18 MiB/s [2024-11-15T10:37:43.706Z] 4535.29 IOPS, 17.72 MiB/s [2024-11-15T10:37:44.640Z] 5010.38 IOPS, 19.57 MiB/s [2024-11-15T10:37:45.575Z] 5358.78 IOPS, 20.93 MiB/s [2024-11-15T10:37:45.575Z] 5627.90 IOPS, 21.98 MiB/s 00:19:44.723 Latency(us) 00:19:44.723 [2024-11-15T10:37:45.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.723 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:44.723 Verification LBA range: start 0x0 length 0x4000 00:19:44.723 NVMe0n1 : 10.01 5633.39 22.01 0.00 0.00 22676.14 1809.69 3019898.88 00:19:44.723 [2024-11-15T10:37:45.576Z] =================================================================================================================== 00:19:44.723 [2024-11-15T10:37:45.576Z] Total : 5633.39 22.01 0.00 0.00 22676.14 1809.69 3019898.88 00:19:44.723 { 00:19:44.723 "results": [ 00:19:44.723 { 00:19:44.723 "job": "NVMe0n1", 00:19:44.723 "core_mask": "0x4", 00:19:44.723 "workload": "verify", 00:19:44.723 "status": "finished", 00:19:44.723 "verify_range": { 00:19:44.723 "start": 0, 00:19:44.723 "length": 16384 00:19:44.723 }, 00:19:44.723 "queue_depth": 128, 00:19:44.723 "io_size": 4096, 00:19:44.723 "runtime": 10.009427, 00:19:44.723 "iops": 5633.389403809029, 00:19:44.723 "mibps": 22.00542735862902, 00:19:44.723 "io_failed": 0, 00:19:44.723 "io_timeout": 0, 00:19:44.723 "avg_latency_us": 22676.138414624904, 00:19:44.723 "min_latency_us": 1809.6872727272728, 00:19:44.723 "max_latency_us": 3019898.88 00:19:44.723 } 00:19:44.723 ], 00:19:44.723 "core_count": 1 00:19:44.723 } 00:19:44.723 10:37:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:44.723 10:37:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82463 00:19:44.723 10:37:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:19:44.723 Running I/O for 10 seconds... 
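The bdevperf summary above is internally consistent: with a 4096-byte IO size, the reported IOPS converts to the reported MiB/s. A quick sanity check of that arithmetic, using the values copied from the JSON block above (io_size 4096, iops 5633.39, mibps ~22.01):

  # 5633.39 IOPS x 4096-byte IOs -> MiB/s; should match the "mibps" field in the results JSON.
  awk 'BEGIN { printf "%.2f MiB/s\n", 5633.39 * 4096 / (1024 * 1024) }'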
00:19:45.667 10:37:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:45.929 6484.00 IOPS, 25.33 MiB/s [2024-11-15T10:37:46.782Z] [2024-11-15 10:37:46.702129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 
00:19:45.929 [2024-11-15 10:37:46.702430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.929 [2024-11-15 10:37:46.702572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702806] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the 
state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.702999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.703008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.703015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.703024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.703032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f150 is same with the state(6) to be set 00:19:45.930 [2024-11-15 10:37:46.703099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:58248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.930 [2024-11-15 10:37:46.703145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.930 [2024-11-15 10:37:46.703168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.930 [2024-11-15 10:37:46.703180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.930 [2024-11-15 10:37:46.703192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:58264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.930 [2024-11-15 10:37:46.703201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.930 [2024-11-15 10:37:46.703213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:58272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.930 [2024-11-15 10:37:46.703223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.930 [2024-11-15 10:37:46.703234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.930 [2024-11-15 10:37:46.703244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.930 [2024-11-15 10:37:46.703261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:58288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.930 [2024-11-15 10:37:46.703272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.930 [2024-11-15 10:37:46.703295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:58296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.930 [2024-11-15 10:37:46.703327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.930 [2024-11-15 10:37:46.703358] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:83 nsid:1 lba:58304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.930 [2024-11-15 10:37:46.703368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.930 [2024-11-15 10:37:46.703379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:58312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.930 [2024-11-15 10:37:46.703389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.930 [2024-11-15 10:37:46.703405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:58320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.930 [2024-11-15 10:37:46.703414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.930 [2024-11-15 10:37:46.703430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:58328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.930 [2024-11-15 10:37:46.703439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.930 [2024-11-15 10:37:46.703451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:58336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.930 [2024-11-15 10:37:46.703460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.930 [2024-11-15 10:37:46.703476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:58344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.930 [2024-11-15 10:37:46.703486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.930 [2024-11-15 10:37:46.703497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.703514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.703529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:58360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.703543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.703576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:58368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.703585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.703604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:58376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.703624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.703636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 
lba:58384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.703645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.703657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:58392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.703666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.703677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:58400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.703687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.703699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.703708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.703719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:58416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.703728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.703739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:58424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.703748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.703760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:58432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.703769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.703781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:58440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.703790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.703802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:58448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.703811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.703822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:58456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.703832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.703843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:58464 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.703853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.703865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:58472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.703882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.703894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:58480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.703909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.703921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:58488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.703930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.703941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:58496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.703951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.703966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:58504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.703976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.703987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:58512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.703997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.704008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:58520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.704017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.704028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:58528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.704037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.704048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:58536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.704072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.704083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:58544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 
10:37:46.704093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.704104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:58552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.704113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.704124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.704133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.704147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:58568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.704157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.704168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:58576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.704178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.704189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.704199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.704211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:58592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.704220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.704232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:58600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.704246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.704257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:58608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.704267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.704278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:58616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.704287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.704298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:58624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.704308] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.704319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:58632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.704328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.704340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:58640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.704348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.704359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:58648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.931 [2024-11-15 10:37:46.704369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.931 [2024-11-15 10:37:46.704380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:58656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:58664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:58672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:58680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:58688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:58696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:58712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:58728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:58736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:58744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:58752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:58760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:58768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:58784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:58800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:58808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:58816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:58824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:58840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:58848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:58856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:58864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 
[2024-11-15 10:37:46.704967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:58872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.704987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:58880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.704997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.705008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:58888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.705017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.705029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:58896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.705038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.705058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:58904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.705070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.705081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:58912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.705090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.705101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.705110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.705122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:58928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.705130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.705142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:58936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.932 [2024-11-15 10:37:46.705151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.705162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.932 [2024-11-15 10:37:46.705171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.705182] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:59160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.932 [2024-11-15 10:37:46.705191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.705202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.932 [2024-11-15 10:37:46.705221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.932 [2024-11-15 10:37:46.705232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.932 [2024-11-15 10:37:46.705241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:59184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.933 [2024-11-15 10:37:46.705266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:59192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.933 [2024-11-15 10:37:46.705291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.933 [2024-11-15 10:37:46.705311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:58944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.933 [2024-11-15 10:37:46.705331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:58952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.933 [2024-11-15 10:37:46.705351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:58960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.933 [2024-11-15 10:37:46.705371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:58968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.933 [2024-11-15 10:37:46.705392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:87 nsid:1 lba:58976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.933 [2024-11-15 10:37:46.705412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.933 [2024-11-15 10:37:46.705432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:58992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.933 [2024-11-15 10:37:46.705453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.933 [2024-11-15 10:37:46.705473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:59008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.933 [2024-11-15 10:37:46.705493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:59016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.933 [2024-11-15 10:37:46.705514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:59024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.933 [2024-11-15 10:37:46.705534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:59032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.933 [2024-11-15 10:37:46.705554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.933 [2024-11-15 10:37:46.705575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:59048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.933 [2024-11-15 10:37:46.705600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:59056 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.933 [2024-11-15 10:37:46.705622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.933 [2024-11-15 10:37:46.705643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:59072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.933 [2024-11-15 10:37:46.705663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.933 [2024-11-15 10:37:46.705684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.933 [2024-11-15 10:37:46.705704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.933 [2024-11-15 10:37:46.705724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.933 [2024-11-15 10:37:46.705744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:59112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.933 [2024-11-15 10:37:46.705764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.933 [2024-11-15 10:37:46.705784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:59128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.933 [2024-11-15 10:37:46.705804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:45.933 [2024-11-15 10:37:46.705824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:59216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.933 [2024-11-15 10:37:46.705844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.933 [2024-11-15 10:37:46.705864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.933 [2024-11-15 10:37:46.705883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.933 [2024-11-15 10:37:46.705903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:59248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.933 [2024-11-15 10:37:46.705927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:59256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.933 [2024-11-15 10:37:46.705948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.933 [2024-11-15 10:37:46.705969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.933 [2024-11-15 10:37:46.705989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.705999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2150 is same with the state(6) to be set 00:19:45.933 [2024-11-15 10:37:46.706011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.933 [2024-11-15 10:37:46.706031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.933 [2024-11-15 10:37:46.706039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59144 len:8 PRP1 0x0 PRP2 0x0 
00:19:45.933 [2024-11-15 10:37:46.706057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.933 [2024-11-15 10:37:46.706359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:45.933 [2024-11-15 10:37:46.706450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x863e50 (9): Bad file descriptor 00:19:45.934 [2024-11-15 10:37:46.706559] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:45.934 [2024-11-15 10:37:46.706589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863e50 with addr=10.0.0.3, port=4420 00:19:45.934 [2024-11-15 10:37:46.706600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863e50 is same with the state(6) to be set 00:19:45.934 [2024-11-15 10:37:46.706618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x863e50 (9): Bad file descriptor 00:19:45.934 [2024-11-15 10:37:46.706633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:45.934 [2024-11-15 10:37:46.706643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:45.934 [2024-11-15 10:37:46.706653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:45.934 [2024-11-15 10:37:46.706663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:19:45.934 [2024-11-15 10:37:46.706674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:45.934 10:37:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:19:46.870 3640.50 IOPS, 14.22 MiB/s [2024-11-15T10:37:47.723Z] [2024-11-15 10:37:47.706835] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:46.870 [2024-11-15 10:37:47.706910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863e50 with addr=10.0.0.3, port=4420 00:19:46.870 [2024-11-15 10:37:47.706928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863e50 is same with the state(6) to be set 00:19:46.870 [2024-11-15 10:37:47.706964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x863e50 (9): Bad file descriptor 00:19:46.870 [2024-11-15 10:37:47.707003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:46.870 [2024-11-15 10:37:47.707014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:46.870 [2024-11-15 10:37:47.707024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:46.870 [2024-11-15 10:37:47.707035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
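Note on the repeated connect() failures above: errno = 111 is ECONNREFUSED, which is consistent with the target's listener on 10.0.0.3:4420 having been removed earlier in the test, so every reconnect attempt from bdev_nvme is refused until the listener is restored (as it is further down in this log). A minimal sketch of toggling the listener by hand, using the same rpc.py calls that appear verbatim in this log (it assumes the nvmf target and subsystem nqn.2016-06.io.spdk:cnode1 are already running):

  # drop the listener so initiator-side reconnects fail with ECONNREFUSED (errno 111)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # ...let the initiator retry for a while, then restore the listener
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420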
00:19:46.870 [2024-11-15 10:37:47.707047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:48.063 2427.00 IOPS, 9.48 MiB/s [2024-11-15T10:37:48.916Z] [2024-11-15 10:37:48.707221] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:48.063 [2024-11-15 10:37:48.707300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863e50 with addr=10.0.0.3, port=4420 00:19:48.063 [2024-11-15 10:37:48.707349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863e50 is same with the state(6) to be set 00:19:48.063 [2024-11-15 10:37:48.707375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x863e50 (9): Bad file descriptor 00:19:48.063 [2024-11-15 10:37:48.707394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:48.063 [2024-11-15 10:37:48.707404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:48.063 [2024-11-15 10:37:48.707415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:48.063 [2024-11-15 10:37:48.707426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:19:48.063 [2024-11-15 10:37:48.707437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:48.999 1820.25 IOPS, 7.11 MiB/s [2024-11-15T10:37:49.852Z] [2024-11-15 10:37:49.711123] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:48.999 [2024-11-15 10:37:49.711366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863e50 with addr=10.0.0.3, port=4420 00:19:48.999 [2024-11-15 10:37:49.711393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863e50 is same with the state(6) to be set 00:19:48.999 [2024-11-15 10:37:49.711654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x863e50 (9): Bad file descriptor 00:19:48.999 [2024-11-15 10:37:49.711901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:48.999 [2024-11-15 10:37:49.711914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:48.999 [2024-11-15 10:37:49.711927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:48.999 [2024-11-15 10:37:49.711938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:19:48.999 [2024-11-15 10:37:49.711950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:49.000 10:37:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:49.258 [2024-11-15 10:37:50.020882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:49.258 10:37:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82463 00:19:50.084 1456.20 IOPS, 5.69 MiB/s [2024-11-15T10:37:50.937Z] [2024-11-15 10:37:50.738127] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:19:51.955 2501.33 IOPS, 9.77 MiB/s [2024-11-15T10:37:53.745Z] 3480.00 IOPS, 13.59 MiB/s [2024-11-15T10:37:54.683Z] 4217.00 IOPS, 16.47 MiB/s [2024-11-15T10:37:55.620Z] 4795.56 IOPS, 18.73 MiB/s [2024-11-15T10:37:55.620Z] 5256.40 IOPS, 20.53 MiB/s 00:19:54.767 Latency(us) 00:19:54.767 [2024-11-15T10:37:55.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.767 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:54.767 Verification LBA range: start 0x0 length 0x4000 00:19:54.767 NVMe0n1 : 10.01 5263.21 20.56 3644.48 0.00 14329.52 651.64 3019898.88 00:19:54.767 [2024-11-15T10:37:55.620Z] =================================================================================================================== 00:19:54.767 [2024-11-15T10:37:55.620Z] Total : 5263.21 20.56 3644.48 0.00 14329.52 0.00 3019898.88 00:19:54.767 { 00:19:54.767 "results": [ 00:19:54.767 { 00:19:54.767 "job": "NVMe0n1", 00:19:54.767 "core_mask": "0x4", 00:19:54.767 "workload": "verify", 00:19:54.767 "status": "finished", 00:19:54.767 "verify_range": { 00:19:54.767 "start": 0, 00:19:54.767 "length": 16384 00:19:54.767 }, 00:19:54.767 "queue_depth": 128, 00:19:54.767 "io_size": 4096, 00:19:54.767 "runtime": 10.009103, 00:19:54.767 "iops": 5263.208900937477, 00:19:54.767 "mibps": 20.55940976928702, 00:19:54.767 "io_failed": 36478, 00:19:54.767 "io_timeout": 0, 00:19:54.767 "avg_latency_us": 14329.517256759706, 00:19:54.768 "min_latency_us": 651.6363636363636, 00:19:54.768 "max_latency_us": 3019898.88 00:19:54.768 } 00:19:54.768 ], 00:19:54.768 "core_count": 1 00:19:54.768 } 00:19:54.768 10:37:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82342 00:19:54.768 10:37:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 82342 ']' 00:19:54.768 10:37:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 82342 00:19:54.768 10:37:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:19:54.768 10:37:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:54.768 10:37:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82342 00:19:55.026 10:37:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:55.026 10:37:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:55.026 10:37:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82342' 00:19:55.026 killing process with pid 82342 00:19:55.026 Received shutdown signal, test time was about 10.000000 
seconds 00:19:55.026 00:19:55.026 Latency(us) 00:19:55.026 [2024-11-15T10:37:55.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.026 [2024-11-15T10:37:55.879Z] =================================================================================================================== 00:19:55.026 [2024-11-15T10:37:55.879Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:55.026 10:37:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 82342 00:19:55.026 10:37:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 82342 00:19:55.026 10:37:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82587 00:19:55.026 10:37:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:19:55.026 10:37:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82587 /var/tmp/bdevperf.sock 00:19:55.026 10:37:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 82587 ']' 00:19:55.026 10:37:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:55.026 10:37:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:55.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:55.026 10:37:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:55.026 10:37:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:55.026 10:37:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:55.285 [2024-11-15 10:37:55.887253] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:19:55.285 [2024-11-15 10:37:55.887373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82587 ] 00:19:55.285 [2024-11-15 10:37:56.034942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.285 [2024-11-15 10:37:56.098464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.544 [2024-11-15 10:37:56.154331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:55.544 10:37:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:55.544 10:37:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:19:55.544 10:37:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82590 00:19:55.544 10:37:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82587 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:19:55.544 10:37:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:19:55.803 10:37:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:56.061 NVMe0n1 00:19:56.061 10:37:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82637 00:19:56.061 10:37:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:56.061 10:37:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:19:56.320 Running I/O for 10 seconds... 
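For readability, the second bdevperf run that the log has just set up can be condensed into the following hand-runnable sketch. Every command is taken verbatim from the log above; the only additions are shell backgrounding and comments, and the paths are the ones used in this workspace:

  # start bdevperf on core mask 0x4, RPC-driven over a UNIX domain socket (sketch: run in background)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
  # bdev_nvme options exactly as used by host/timeout.sh in this run
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
  # attach the NVMe/TCP controller with a 5 s controller-loss timeout and a 2 s reconnect delay
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # kick off the 10-second randread workload
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests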
00:19:57.257 10:37:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:57.519 14859.00 IOPS, 58.04 MiB/s [2024-11-15T10:37:58.372Z] [2024-11-15 10:37:58.130199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 
00:19:57.519 [2024-11-15 10:37:58.130438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.519 [2024-11-15 10:37:58.130653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130805] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the 
state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.130992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b762c0 is same with the state(6) to be set 00:19:57.520 [2024-11-15 10:37:58.131413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.520 [2024-11-15 10:37:58.131444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.520 [2024-11-15 10:37:58.131469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.520 [2024-11-15 10:37:58.131482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:57.520 [2024-11-15 10:37:58.131496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.520 [2024-11-15 10:37:58.131506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.520 [2024-11-15 10:37:58.131518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.131529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.131541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:54992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.131551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.131563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:59200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.131574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.131585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.131595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.131608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:88064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.131618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.131630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.131640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.131659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:125024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.131669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.131683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.131693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.131708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.131718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 
10:37:58.131734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.131744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.131756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:52112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.131766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.131778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.131788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.131800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.131811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.131823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.131836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.131848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.131859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.131871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.131881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.131893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.131903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.131915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.131925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.131938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.131948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.131960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:69880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.131970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.131982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.131992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.132004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:69304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.132014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.132026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:89344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.132037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.132060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.132073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.132086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:119352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.132096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.132108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:106696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.132118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.132130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.132141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.132153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.132163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.132176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.132187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.132199] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.132210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.132222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.132232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.132244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:51232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.132254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.132266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.132276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.132288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:46456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.132299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.132310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.132321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.132333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.132343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.132355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:116008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.132366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.132379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:76512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.132389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.132401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:69992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.132411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.521 [2024-11-15 10:37:58.132423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 
lba:19872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.521 [2024-11-15 10:37:58.132433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.132445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:37224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.132455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.132467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.132477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.132489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.132499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.132511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.132521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.132542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.132552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.132564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:103944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.132575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.132587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.132597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.132609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:68552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.132619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.132631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.132641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.132653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.132663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.132675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.132685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.132697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.132707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.132720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:122288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.132730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.132742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.132752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.132764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:104096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.132775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.132786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.132797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.132809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:26240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.132819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.132831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.132841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.132853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.132863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.132875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 
10:37:58.132885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.132902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.132912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.132925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:29248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.132935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.132947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.132958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.132970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.132980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.132992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.133003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.133014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.133025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.133037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.133047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.133070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.133081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.133098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:129248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.133109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.133121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:117128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.133131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.133143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.133154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.133165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:91464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.133176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.133188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.133198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.133210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:126880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.133220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.133232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.133242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.133254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.133264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.133281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:33480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.133291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.133303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.133314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.133326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:108560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.133336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.522 [2024-11-15 10:37:58.133349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:117056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.522 [2024-11-15 10:37:58.133359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:121456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:32384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:46040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:104272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:118656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:54896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:47552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:67744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:73680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:37256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:31400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:50272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:57.523 [2024-11-15 10:37:58.133833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:87248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:69448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:121704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:47064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.133988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:43064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.133998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.134015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:123392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.134026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.134038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.134056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 
10:37:58.134070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.134081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.134093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:68376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.134103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.134115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.523 [2024-11-15 10:37:58.134125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.523 [2024-11-15 10:37:58.134137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:39488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-11-15 10:37:58.134147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-11-15 10:37:58.134159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-11-15 10:37:58.134170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-11-15 10:37:58.134182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:52552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-11-15 10:37:58.134192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-11-15 10:37:58.134209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-11-15 10:37:58.134219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-11-15 10:37:58.134231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-11-15 10:37:58.134241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-11-15 10:37:58.134253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-11-15 10:37:58.134264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-11-15 10:37:58.134276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:69040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-11-15 10:37:58.134286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-11-15 10:37:58.134298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-11-15 10:37:58.134308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-11-15 10:37:58.134320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-11-15 10:37:58.134330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-11-15 10:37:58.134342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-11-15 10:37:58.134352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-11-15 10:37:58.134364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-11-15 10:37:58.134374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-11-15 10:37:58.134390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2e30 is same with the state(6) to be set 00:19:57.524 [2024-11-15 10:37:58.134404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.524 [2024-11-15 10:37:58.134413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.524 [2024-11-15 10:37:58.134422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27224 len:8 PRP1 0x0 PRP2 0x0 00:19:57.524 [2024-11-15 10:37:58.134432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-11-15 10:37:58.134807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:57.524 [2024-11-15 10:37:58.134916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x765e50 (9): Bad file descriptor 00:19:57.524 [2024-11-15 10:37:58.135036] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:57.524 [2024-11-15 10:37:58.135074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x765e50 with addr=10.0.0.3, port=4420 00:19:57.524 [2024-11-15 10:37:58.135088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765e50 is same with the state(6) to be set 00:19:57.524 [2024-11-15 10:37:58.135107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x765e50 (9): Bad file descriptor 00:19:57.524 [2024-11-15 10:37:58.135125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:57.524 [2024-11-15 10:37:58.135136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:57.524 [2024-11-15 10:37:58.135146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:19:57.524 [2024-11-15 10:37:58.135157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:19:57.524 [2024-11-15 10:37:58.135168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:57.524 10:37:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82637 00:19:59.396 8510.00 IOPS, 33.24 MiB/s [2024-11-15T10:38:00.249Z] 5673.33 IOPS, 22.16 MiB/s [2024-11-15T10:38:00.249Z] [2024-11-15 10:38:00.135414] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:59.396 [2024-11-15 10:38:00.135490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x765e50 with addr=10.0.0.3, port=4420 00:19:59.396 [2024-11-15 10:38:00.135508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765e50 is same with the state(6) to be set 00:19:59.396 [2024-11-15 10:38:00.135537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x765e50 (9): Bad file descriptor 00:19:59.396 [2024-11-15 10:38:00.135584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:59.396 [2024-11-15 10:38:00.135597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:59.396 [2024-11-15 10:38:00.135609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:59.396 [2024-11-15 10:38:00.135621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:19:59.396 [2024-11-15 10:38:00.135633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:01.342 4255.00 IOPS, 16.62 MiB/s [2024-11-15T10:38:02.195Z] 3404.00 IOPS, 13.30 MiB/s [2024-11-15T10:38:02.195Z] [2024-11-15 10:38:02.135863] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:01.342 [2024-11-15 10:38:02.135936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x765e50 with addr=10.0.0.3, port=4420 00:20:01.343 [2024-11-15 10:38:02.135954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765e50 is same with the state(6) to be set 00:20:01.343 [2024-11-15 10:38:02.135982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x765e50 (9): Bad file descriptor 00:20:01.343 [2024-11-15 10:38:02.136003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:20:01.343 [2024-11-15 10:38:02.136014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:20:01.343 [2024-11-15 10:38:02.136026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:20:01.343 [2024-11-15 10:38:02.136038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:20:01.343 [2024-11-15 10:38:02.136060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:03.212 2836.67 IOPS, 11.08 MiB/s [2024-11-15T10:38:04.325Z] 2431.43 IOPS, 9.50 MiB/s [2024-11-15T10:38:04.325Z] [2024-11-15 10:38:04.136210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:20:03.472 [2024-11-15 10:38:04.136279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:20:03.472 [2024-11-15 10:38:04.136293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:20:03.472 [2024-11-15 10:38:04.136305] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:20:03.472 [2024-11-15 10:38:04.136317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:20:04.408 2127.50 IOPS, 8.31 MiB/s 00:20:04.408 Latency(us) 00:20:04.408 [2024-11-15T10:38:05.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.408 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:20:04.408 NVMe0n1 : 8.16 2085.97 8.15 15.69 0.00 60796.10 8400.52 7015926.69 00:20:04.408 [2024-11-15T10:38:05.261Z] =================================================================================================================== 00:20:04.408 [2024-11-15T10:38:05.261Z] Total : 2085.97 8.15 15.69 0.00 60796.10 8400.52 7015926.69 00:20:04.408 { 00:20:04.408 "results": [ 00:20:04.409 { 00:20:04.409 "job": "NVMe0n1", 00:20:04.409 "core_mask": "0x4", 00:20:04.409 "workload": "randread", 00:20:04.409 "status": "finished", 00:20:04.409 "queue_depth": 128, 00:20:04.409 "io_size": 4096, 00:20:04.409 "runtime": 8.159266, 00:20:04.409 "iops": 2085.9719489473687, 00:20:04.409 "mibps": 8.148327925575659, 00:20:04.409 "io_failed": 128, 00:20:04.409 "io_timeout": 0, 00:20:04.409 "avg_latency_us": 60796.09841168862, 00:20:04.409 "min_latency_us": 8400.523636363636, 00:20:04.409 "max_latency_us": 7015926.69090909 00:20:04.409 } 00:20:04.409 ], 00:20:04.409 "core_count": 1 00:20:04.409 } 00:20:04.409 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:04.409 Attaching 5 probes... 
00:20:04.409 1332.103638: reset bdev controller NVMe0 00:20:04.409 1332.290444: reconnect bdev controller NVMe0 00:20:04.409 3332.555495: reconnect delay bdev controller NVMe0 00:20:04.409 3332.583591: reconnect bdev controller NVMe0 00:20:04.409 5333.036934: reconnect delay bdev controller NVMe0 00:20:04.409 5333.066127: reconnect bdev controller NVMe0 00:20:04.409 7333.503923: reconnect delay bdev controller NVMe0 00:20:04.409 7333.529244: reconnect bdev controller NVMe0 00:20:04.409 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:20:04.409 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:20:04.409 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82590 00:20:04.409 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:04.409 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82587 00:20:04.409 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 82587 ']' 00:20:04.409 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 82587 00:20:04.409 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:20:04.409 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:04.409 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82587 00:20:04.409 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:04.409 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:04.409 killing process with pid 82587 00:20:04.409 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82587' 00:20:04.409 Received shutdown signal, test time was about 8.225459 seconds 00:20:04.409 00:20:04.409 Latency(us) 00:20:04.409 [2024-11-15T10:38:05.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.409 [2024-11-15T10:38:05.262Z] =================================================================================================================== 00:20:04.409 [2024-11-15T10:38:05.262Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:04.409 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 82587 00:20:04.409 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 82587 00:20:04.667 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:04.928 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:04.928 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:20:04.928 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:04.928 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:20:05.195 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:05.195 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:20:05.195 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:05.195 10:38:05 
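The pass/fail decision for this timeout test hinges on the bpftrace output captured in trace.txt above: host/timeout.sh@132 counts the 'reconnect delay bdev controller NVMe0' probe hits and evidently treats two or fewer delayed reconnects as a failure. Here grep -c returned 3 (delays at roughly 3.3 s, 5.3 s and 7.3 s into the run), so the (( 3 <= 2 )) guard stays false and the run is allowed to pass. A minimal sketch of that check, assuming the same trace file path and marker string shown in the trace; the real script's failure handling may differ:

    # Hedged sketch of the reconnect-delay check performed by host/timeout.sh@132.
    trace_file=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

    # Count how many times bdev_nvme throttled a reconnect attempt.
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace_file")

    # The run only counts as a pass if the configured reconnect delay kicked in
    # more than twice during the ~8 s of forced connection failures.
    if (( delays <= 2 )); then
        echo "expected more than 2 delayed reconnects, got $delays" >&2
        exit 1
    fi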
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:05.195 rmmod nvme_tcp 00:20:05.195 rmmod nvme_fabrics 00:20:05.195 rmmod nvme_keyring 00:20:05.195 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:05.195 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:20:05.195 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:20:05.195 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 82148 ']' 00:20:05.195 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 82148 00:20:05.195 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 82148 ']' 00:20:05.195 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 82148 00:20:05.195 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:20:05.195 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:05.195 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82148 00:20:05.195 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:05.195 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:05.195 killing process with pid 82148 00:20:05.195 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82148' 00:20:05.195 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 82148 00:20:05.195 10:38:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 82148 00:20:05.454 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:05.454 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:05.454 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:05.454 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:20:05.454 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:05.454 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:20:05.454 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:20:05.454 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:05.454 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:05.454 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:05.454 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:05.454 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:05.454 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:05.454 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:05.454 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:05.454 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:05.454 10:38:06 
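Both shutdown paths in this stretch (killprocess 82587 for the bdevperf job above, then killprocess 82148 for the nvmf target here) walk the same autotest_common.sh steps: check that a pid was supplied and is still alive, resolve the process name, refuse to touch sudo, then kill and wait. A rough reconstruction of that helper from the traced steps, as a sketch only; the real function presumably has a non-Linux branch behind the uname check and extra error handling not visible here:

    # Hedged reconstruction of the killprocess helper traced above.
    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1               # the '[' -z "$pid" ']' guard
        kill -0 "$pid" 2>/dev/null || return 0  # already gone, nothing to do
        if [[ $(uname) == Linux ]]; then        # the trace only shows the Linux path
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [[ $process_name == sudo ]] && return 1 # never kill the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }

In this log the lookup returns reactor_2 for pid 82587 and reactor_0 for pid 82148 before the kill is issued.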
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:05.454 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:05.454 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:05.454 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:05.454 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:05.454 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:05.714 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:05.714 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.714 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:05.714 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.714 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:20:05.714 ************************************ 00:20:05.714 END TEST nvmf_timeout 00:20:05.714 ************************************ 00:20:05.714 00:20:05.714 real 0m46.576s 00:20:05.714 user 2m16.529s 00:20:05.714 sys 0m5.544s 00:20:05.714 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:05.714 10:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:05.714 10:38:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:20:05.714 10:38:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:05.714 00:20:05.714 real 5m15.387s 00:20:05.714 user 13m43.248s 00:20:05.714 sys 1m10.796s 00:20:05.714 10:38:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:05.714 10:38:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.714 ************************************ 00:20:05.714 END TEST nvmf_host 00:20:05.714 ************************************ 00:20:05.714 10:38:06 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:20:05.714 10:38:06 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:20:05.714 00:20:05.714 real 13m9.103s 00:20:05.714 user 31m43.064s 00:20:05.714 sys 3m12.492s 00:20:05.714 10:38:06 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:05.714 10:38:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:05.714 ************************************ 00:20:05.714 END TEST nvmf_tcp 00:20:05.714 ************************************ 00:20:05.714 10:38:06 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]] 00:20:05.714 10:38:06 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:05.714 10:38:06 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:05.714 10:38:06 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:05.714 10:38:06 -- common/autotest_common.sh@10 -- # set +x 00:20:05.714 ************************************ 00:20:05.714 START TEST nvmf_dif 00:20:05.714 ************************************ 00:20:05.714 10:38:06 nvmf_dif -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:05.714 * Looking for test storage... 
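The nvmf_veth_fini sequence that finishes just above dismantles the virtual topology used for NET_TYPE=virt runs: the four bridge ports are detached and brought down, the bridge and the host-side veth ends are deleted, and the target-side ends plus their namespace go away with remove_spdk_ns. Condensed into one place, using exactly the interface and namespace names printed in the trace (ordering and error suppression in nvmf/common.sh may differ slightly):

    # Hedged sketch of the nvmf_veth_fini teardown traced above.
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster   # detach from the nvmf_br bridge
        ip link set "$port" down
    done
    ip link delete nvmf_br type bridge # drop the bridge itself
    ip link delete nvmf_init_if        # host-side veth ends
    ip link delete nvmf_init_if2
    # target-side veth ends live in the spdk namespace
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumption: what remove_spdk_ns boils down to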
00:20:05.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:05.714 10:38:06 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:05.714 10:38:06 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:20:05.714 10:38:06 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:05.973 10:38:06 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:05.973 10:38:06 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:20:05.973 10:38:06 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:05.973 10:38:06 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:05.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.973 --rc genhtml_branch_coverage=1 00:20:05.973 --rc genhtml_function_coverage=1 00:20:05.973 --rc genhtml_legend=1 00:20:05.973 --rc geninfo_all_blocks=1 00:20:05.973 --rc geninfo_unexecuted_blocks=1 00:20:05.973 00:20:05.973 ' 00:20:05.973 10:38:06 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:05.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.973 --rc genhtml_branch_coverage=1 00:20:05.973 --rc genhtml_function_coverage=1 00:20:05.973 --rc genhtml_legend=1 00:20:05.973 --rc geninfo_all_blocks=1 00:20:05.973 --rc geninfo_unexecuted_blocks=1 00:20:05.973 00:20:05.973 ' 00:20:05.973 10:38:06 nvmf_dif -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:20:05.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.973 --rc genhtml_branch_coverage=1 00:20:05.973 --rc genhtml_function_coverage=1 00:20:05.973 --rc genhtml_legend=1 00:20:05.973 --rc geninfo_all_blocks=1 00:20:05.973 --rc geninfo_unexecuted_blocks=1 00:20:05.973 00:20:05.973 ' 00:20:05.973 10:38:06 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:05.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.973 --rc genhtml_branch_coverage=1 00:20:05.973 --rc genhtml_function_coverage=1 00:20:05.973 --rc genhtml_legend=1 00:20:05.973 --rc geninfo_all_blocks=1 00:20:05.973 --rc geninfo_unexecuted_blocks=1 00:20:05.973 00:20:05.973 ' 00:20:05.973 10:38:06 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:05.973 10:38:06 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:20:05.973 10:38:06 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:05.973 10:38:06 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:05.973 10:38:06 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:05.973 10:38:06 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:05.973 10:38:06 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:05.973 10:38:06 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:05.973 10:38:06 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:05.973 10:38:06 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:05.973 10:38:06 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:05.973 10:38:06 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:05.973 10:38:06 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:20:05.973 10:38:06 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:20:05.973 10:38:06 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:05.973 10:38:06 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:05.973 10:38:06 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:05.974 10:38:06 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:20:05.974 10:38:06 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:05.974 10:38:06 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:05.974 10:38:06 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:05.974 10:38:06 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.974 10:38:06 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.974 10:38:06 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.974 10:38:06 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:20:05.974 10:38:06 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:05.974 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:05.974 10:38:06 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:20:05.974 10:38:06 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:05.974 10:38:06 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:05.974 10:38:06 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:20:05.974 10:38:06 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.974 10:38:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:05.974 10:38:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:05.974 10:38:06 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:05.974 Cannot find device "nvmf_init_br" 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@162 -- # true 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:05.974 Cannot find device "nvmf_init_br2" 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@163 -- # true 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:05.974 Cannot find device "nvmf_tgt_br" 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@164 -- # true 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:05.974 Cannot find device "nvmf_tgt_br2" 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@165 -- # true 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:05.974 Cannot find device "nvmf_init_br" 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@166 -- # true 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:05.974 Cannot find device "nvmf_init_br2" 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@167 -- # true 00:20:05.974 10:38:06 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:05.974 Cannot find device "nvmf_tgt_br" 00:20:05.975 10:38:06 nvmf_dif -- nvmf/common.sh@168 -- # true 00:20:05.975 10:38:06 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:05.975 Cannot find device "nvmf_tgt_br2" 00:20:05.975 10:38:06 nvmf_dif -- nvmf/common.sh@169 -- # true 00:20:05.975 10:38:06 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:05.975 Cannot find device "nvmf_br" 00:20:05.975 10:38:06 nvmf_dif -- nvmf/common.sh@170 -- # true 00:20:05.975 10:38:06 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:20:05.975 Cannot find device "nvmf_init_if" 00:20:05.975 10:38:06 nvmf_dif -- nvmf/common.sh@171 -- # true 00:20:05.975 10:38:06 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:05.975 Cannot find device "nvmf_init_if2" 00:20:05.975 10:38:06 nvmf_dif -- nvmf/common.sh@172 -- # true 00:20:05.975 10:38:06 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:05.975 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:05.975 10:38:06 nvmf_dif -- nvmf/common.sh@173 -- # true 00:20:05.975 10:38:06 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:05.975 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:05.975 10:38:06 nvmf_dif -- nvmf/common.sh@174 -- # true 00:20:05.975 10:38:06 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:06.234 10:38:06 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:06.234 10:38:06 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:06.234 10:38:06 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:06.234 10:38:06 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:06.234 10:38:06 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:06.234 10:38:06 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:06.234 10:38:06 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:06.234 10:38:06 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:06.234 10:38:06 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:06.234 10:38:06 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:06.234 10:38:06 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:06.234 10:38:06 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:06.234 10:38:06 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:06.234 10:38:06 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:06.234 10:38:06 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:06.234 10:38:06 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:06.234 10:38:06 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:06.234 10:38:06 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:06.234 10:38:07 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:06.234 10:38:07 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:06.234 10:38:07 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:06.234 10:38:07 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:06.234 10:38:07 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:06.234 10:38:07 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:06.234 10:38:07 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:06.234 10:38:07 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:06.234 10:38:07 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:06.234 10:38:07 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:06.234 10:38:07 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:06.234 10:38:07 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:06.234 10:38:07 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:06.234 10:38:07 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:06.234 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:06.234 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:20:06.234 00:20:06.234 --- 10.0.0.3 ping statistics --- 00:20:06.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.234 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:20:06.234 10:38:07 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:06.234 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:06.234 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:20:06.234 00:20:06.234 --- 10.0.0.4 ping statistics --- 00:20:06.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.234 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:20:06.234 10:38:07 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:06.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:06.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:20:06.234 00:20:06.234 --- 10.0.0.1 ping statistics --- 00:20:06.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.234 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:06.234 10:38:07 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:06.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:06.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:20:06.234 00:20:06.234 --- 10.0.0.2 ping statistics --- 00:20:06.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.234 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:20:06.234 10:38:07 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:06.234 10:38:07 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:20:06.234 10:38:07 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:20:06.234 10:38:07 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:06.803 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:06.803 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:06.803 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:06.803 10:38:07 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.803 10:38:07 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:06.803 10:38:07 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:06.803 10:38:07 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.803 10:38:07 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:06.803 10:38:07 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:06.803 10:38:07 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:06.803 10:38:07 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:20:06.803 10:38:07 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:06.803 10:38:07 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:06.803 10:38:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:06.803 10:38:07 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=83131 00:20:06.803 10:38:07 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:06.803 10:38:07 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 83131 00:20:06.803 10:38:07 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 83131 ']' 00:20:06.803 10:38:07 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.803 10:38:07 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:06.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.803 10:38:07 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.803 10:38:07 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:06.803 10:38:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:06.803 [2024-11-15 10:38:07.546000] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:20:06.803 [2024-11-15 10:38:07.546103] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.062 [2024-11-15 10:38:07.712686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.062 [2024-11-15 10:38:07.782071] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:07.062 [2024-11-15 10:38:07.782125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.062 [2024-11-15 10:38:07.782139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.062 [2024-11-15 10:38:07.782149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.062 [2024-11-15 10:38:07.782159] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:07.062 [2024-11-15 10:38:07.782608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.062 [2024-11-15 10:38:07.842455] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:07.321 10:38:07 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:07.321 10:38:07 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:20:07.321 10:38:07 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:07.321 10:38:07 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:07.321 10:38:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:07.321 10:38:07 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.321 10:38:07 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:07.321 10:38:07 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:07.321 10:38:07 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.321 10:38:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:07.321 [2024-11-15 10:38:07.958456] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.321 10:38:07 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.321 10:38:07 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:07.321 10:38:07 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:07.321 10:38:07 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:07.321 10:38:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:07.321 ************************************ 00:20:07.321 START TEST fio_dif_1_default 00:20:07.321 ************************************ 00:20:07.321 10:38:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:20:07.321 10:38:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:07.321 10:38:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:07.321 10:38:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:07.321 10:38:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:07.321 10:38:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:07.321 10:38:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:07.321 10:38:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.321 10:38:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:07.321 bdev_null0 00:20:07.321 10:38:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.321 10:38:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:07.321 
10:38:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.321 10:38:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:07.321 10:38:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.321 10:38:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:07.321 10:38:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.321 10:38:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:07.321 10:38:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.321 10:38:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:07.321 10:38:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.321 10:38:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:07.321 [2024-11-15 10:38:08.002649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:07.321 { 00:20:07.321 "params": { 00:20:07.321 "name": "Nvme$subsystem", 00:20:07.321 "trtype": "$TEST_TRANSPORT", 00:20:07.321 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:07.321 "adrfam": "ipv4", 00:20:07.321 "trsvcid": "$NVMF_PORT", 00:20:07.321 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:07.321 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:07.321 "hdgst": ${hdgst:-false}, 00:20:07.321 "ddgst": ${ddgst:-false} 00:20:07.321 }, 00:20:07.321 "method": "bdev_nvme_attach_controller" 00:20:07.321 } 00:20:07.321 EOF 00:20:07.321 )") 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # 
local sanitizers 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:07.321 10:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:20:07.322 10:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:07.322 10:38:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:20:07.322 10:38:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:07.322 "params": { 00:20:07.322 "name": "Nvme0", 00:20:07.322 "trtype": "tcp", 00:20:07.322 "traddr": "10.0.0.3", 00:20:07.322 "adrfam": "ipv4", 00:20:07.322 "trsvcid": "4420", 00:20:07.322 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:07.322 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:07.322 "hdgst": false, 00:20:07.322 "ddgst": false 00:20:07.322 }, 00:20:07.322 "method": "bdev_nvme_attach_controller" 00:20:07.322 }' 00:20:07.322 10:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:07.322 10:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:07.322 10:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:07.322 10:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:07.322 10:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:07.322 10:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:07.322 10:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:07.322 10:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:07.322 10:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:07.322 10:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:07.580 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:07.580 fio-3.35 00:20:07.580 Starting 1 thread 00:20:19.850 00:20:19.850 filename0: (groupid=0, jobs=1): err= 0: pid=83196: Fri Nov 15 10:38:18 2024 00:20:19.850 read: IOPS=8572, BW=33.5MiB/s (35.1MB/s)(335MiB/10001msec) 00:20:19.850 slat (usec): min=6, max=1723, avg= 8.96, stdev= 6.75 00:20:19.850 clat (usec): min=358, max=1926, avg=440.05, stdev=29.16 00:20:19.850 lat (usec): min=365, max=2248, avg=449.01, stdev=30.62 00:20:19.850 clat percentiles (usec): 00:20:19.850 | 1.00th=[ 392], 5.00th=[ 404], 10.00th=[ 412], 
20.00th=[ 420], 00:20:19.850 | 30.00th=[ 429], 40.00th=[ 433], 50.00th=[ 437], 60.00th=[ 445], 00:20:19.850 | 70.00th=[ 449], 80.00th=[ 457], 90.00th=[ 469], 95.00th=[ 482], 00:20:19.851 | 99.00th=[ 506], 99.50th=[ 519], 99.90th=[ 586], 99.95th=[ 619], 00:20:19.851 | 99.99th=[ 1631] 00:20:19.851 bw ( KiB/s): min=33056, max=34976, per=100.00%, avg=34307.37, stdev=477.25, samples=19 00:20:19.851 iops : min= 8264, max= 8744, avg=8576.84, stdev=119.31, samples=19 00:20:19.851 lat (usec) : 500=98.49%, 750=1.49%, 1000=0.01% 00:20:19.851 lat (msec) : 2=0.02% 00:20:19.851 cpu : usr=84.12%, sys=13.84%, ctx=117, majf=0, minf=9 00:20:19.851 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:19.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.851 issued rwts: total=85736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.851 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:19.851 00:20:19.851 Run status group 0 (all jobs): 00:20:19.851 READ: bw=33.5MiB/s (35.1MB/s), 33.5MiB/s-33.5MiB/s (35.1MB/s-35.1MB/s), io=335MiB (351MB), run=10001-10001msec 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.851 00:20:19.851 real 0m11.094s 00:20:19.851 user 0m9.083s 00:20:19.851 sys 0m1.692s 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:19.851 ************************************ 00:20:19.851 END TEST fio_dif_1_default 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:19.851 ************************************ 00:20:19.851 10:38:19 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:19.851 10:38:19 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:19.851 10:38:19 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:19.851 10:38:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:19.851 ************************************ 00:20:19.851 START TEST fio_dif_1_multi_subsystems 00:20:19.851 ************************************ 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # 
fio_dif_1_multi_subsystems 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:19.851 bdev_null0 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:19.851 [2024-11-15 10:38:19.150770] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:19.851 bdev_null1 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.851 10:38:19 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:19.851 { 00:20:19.851 "params": { 00:20:19.851 "name": "Nvme$subsystem", 00:20:19.851 "trtype": "$TEST_TRANSPORT", 00:20:19.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.851 "adrfam": "ipv4", 00:20:19.851 "trsvcid": "$NVMF_PORT", 00:20:19.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.851 "hdgst": ${hdgst:-false}, 00:20:19.851 "ddgst": ${ddgst:-false} 00:20:19.851 }, 00:20:19.851 "method": "bdev_nvme_attach_controller" 00:20:19.851 } 00:20:19.851 EOF 00:20:19.851 )") 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@82 -- # gen_fio_conf 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:19.851 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:19.851 { 00:20:19.851 "params": { 00:20:19.851 "name": "Nvme$subsystem", 00:20:19.852 "trtype": "$TEST_TRANSPORT", 00:20:19.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.852 "adrfam": "ipv4", 00:20:19.852 "trsvcid": "$NVMF_PORT", 00:20:19.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.852 "hdgst": ${hdgst:-false}, 00:20:19.852 "ddgst": ${ddgst:-false} 00:20:19.852 }, 00:20:19.852 "method": "bdev_nvme_attach_controller" 00:20:19.852 } 00:20:19.852 EOF 00:20:19.852 )") 00:20:19.852 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:19.852 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:19.852 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:19.852 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:20:19.852 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:19.852 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:19.852 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:20:19.852 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:20:19.852 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:19.852 "params": { 00:20:19.852 "name": "Nvme0", 00:20:19.852 "trtype": "tcp", 00:20:19.852 "traddr": "10.0.0.3", 00:20:19.852 "adrfam": "ipv4", 00:20:19.852 "trsvcid": "4420", 00:20:19.852 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:19.852 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:19.852 "hdgst": false, 00:20:19.852 "ddgst": false 00:20:19.852 }, 00:20:19.852 "method": "bdev_nvme_attach_controller" 00:20:19.852 },{ 00:20:19.852 "params": { 00:20:19.852 "name": "Nvme1", 00:20:19.852 "trtype": "tcp", 00:20:19.852 "traddr": "10.0.0.3", 00:20:19.852 "adrfam": "ipv4", 00:20:19.852 "trsvcid": "4420", 00:20:19.852 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.852 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:19.852 "hdgst": false, 00:20:19.852 "ddgst": false 00:20:19.852 }, 00:20:19.852 "method": "bdev_nvme_attach_controller" 00:20:19.852 }' 00:20:19.852 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:19.852 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:19.852 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:19.852 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:19.852 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:19.852 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:19.852 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:19.852 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:19.852 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:19.852 10:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:19.852 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:19.852 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:19.852 fio-3.35 00:20:19.852 Starting 2 threads 00:20:29.828 00:20:29.828 filename0: (groupid=0, jobs=1): err= 0: pid=83358: Fri Nov 15 10:38:30 2024 00:20:29.828 read: IOPS=4747, BW=18.5MiB/s (19.4MB/s)(185MiB/10001msec) 00:20:29.828 slat (nsec): min=7004, max=62286, avg=13253.14, stdev=3829.42 00:20:29.828 clat (usec): min=419, max=2807, avg=805.93, stdev=39.27 00:20:29.828 lat (usec): min=427, max=2838, avg=819.19, stdev=39.58 00:20:29.828 clat percentiles (usec): 00:20:29.828 | 1.00th=[ 742], 5.00th=[ 758], 10.00th=[ 766], 20.00th=[ 783], 00:20:29.828 | 30.00th=[ 791], 40.00th=[ 799], 50.00th=[ 807], 60.00th=[ 807], 00:20:29.828 | 70.00th=[ 824], 80.00th=[ 832], 90.00th=[ 848], 95.00th=[ 857], 00:20:29.828 | 99.00th=[ 889], 99.50th=[ 898], 99.90th=[ 930], 99.95th=[ 947], 00:20:29.828 | 99.99th=[ 2573] 00:20:29.828 bw ( KiB/s): min=18688, max=19232, per=50.03%, avg=19001.26, stdev=160.91, samples=19 00:20:29.828 iops : min= 4672, max= 
4808, avg=4750.21, stdev=40.18, samples=19 00:20:29.828 lat (usec) : 500=0.01%, 750=2.10%, 1000=97.87% 00:20:29.828 lat (msec) : 2=0.01%, 4=0.02% 00:20:29.828 cpu : usr=89.73%, sys=8.85%, ctx=7, majf=0, minf=0 00:20:29.828 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:29.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.828 issued rwts: total=47480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.828 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:29.828 filename1: (groupid=0, jobs=1): err= 0: pid=83359: Fri Nov 15 10:38:30 2024 00:20:29.828 read: IOPS=4747, BW=18.5MiB/s (19.4MB/s)(185MiB/10001msec) 00:20:29.828 slat (nsec): min=7020, max=60960, avg=13230.07, stdev=3752.27 00:20:29.828 clat (usec): min=429, max=3483, avg=806.57, stdev=53.18 00:20:29.828 lat (usec): min=437, max=3512, avg=819.80, stdev=54.32 00:20:29.828 clat percentiles (usec): 00:20:29.828 | 1.00th=[ 701], 5.00th=[ 725], 10.00th=[ 742], 20.00th=[ 775], 00:20:29.828 | 30.00th=[ 791], 40.00th=[ 799], 50.00th=[ 807], 60.00th=[ 816], 00:20:29.828 | 70.00th=[ 832], 80.00th=[ 840], 90.00th=[ 857], 95.00th=[ 873], 00:20:29.828 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 947], 99.95th=[ 963], 00:20:29.828 | 99.99th=[ 2573] 00:20:29.828 bw ( KiB/s): min=18688, max=19232, per=50.03%, avg=19002.95, stdev=162.91, samples=19 00:20:29.828 iops : min= 4672, max= 4808, avg=4750.74, stdev=40.73, samples=19 00:20:29.828 lat (usec) : 500=0.01%, 750=12.51%, 1000=87.46% 00:20:29.828 lat (msec) : 2=0.01%, 4=0.02% 00:20:29.828 cpu : usr=89.91%, sys=8.75%, ctx=7, majf=0, minf=0 00:20:29.828 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:29.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.828 issued rwts: total=47480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.828 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:29.828 00:20:29.828 Run status group 0 (all jobs): 00:20:29.828 READ: bw=37.1MiB/s (38.9MB/s), 18.5MiB/s-18.5MiB/s (19.4MB/s-19.4MB/s), io=371MiB (389MB), run=10001-10001msec 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.828 00:20:29.828 real 0m11.197s 00:20:29.828 user 0m18.762s 00:20:29.828 sys 0m2.067s 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:29.828 ************************************ 00:20:29.828 10:38:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:29.828 END TEST fio_dif_1_multi_subsystems 00:20:29.828 ************************************ 00:20:29.828 10:38:30 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:29.828 10:38:30 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:29.828 10:38:30 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:29.828 10:38:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:29.828 ************************************ 00:20:29.828 START TEST fio_dif_rand_params 00:20:29.828 ************************************ 00:20:29.828 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:20:29.828 10:38:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:29.828 10:38:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:29.828 10:38:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:29.828 10:38:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:29.828 10:38:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:29.828 10:38:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:29.828 10:38:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:29.828 10:38:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:29.828 10:38:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:29.828 10:38:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:29.828 10:38:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:29.828 10:38:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:29.828 10:38:30 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:29.828 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.828 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:29.828 bdev_null0 00:20:29.828 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.828 10:38:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:29.828 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.828 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:29.828 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:29.829 [2024-11-15 10:38:30.400632] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.829 { 00:20:29.829 "params": { 00:20:29.829 "name": "Nvme$subsystem", 00:20:29.829 "trtype": "$TEST_TRANSPORT", 00:20:29.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.829 "adrfam": "ipv4", 00:20:29.829 "trsvcid": "$NVMF_PORT", 00:20:29.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.829 "hdgst": ${hdgst:-false}, 00:20:29.829 "ddgst": ${ddgst:-false} 00:20:29.829 }, 00:20:29.829 "method": "bdev_nvme_attach_controller" 00:20:29.829 } 00:20:29.829 EOF 00:20:29.829 )") 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:29.829 
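The create_subsystem helper traced above reduces to four RPC calls: create a 64 MB null bdev with a 512-byte block size, 16 bytes of metadata and DIF type 3, create an NVMe-oF subsystem that allows any host, attach the bdev to it as a namespace, and add a TCP listener on 10.0.0.3:4420. A minimal standalone sketch of the same sequence, assuming the harness's rpc_cmd wrapper corresponds to scripts/rpc.py talking to an already running nvmf_tgt (the rpc.py path is illustrative, not taken from this log):

    # 64 MB null bdev, 512 B data blocks + 16 B metadata, protection information type 3
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    # subsystem that any host may connect to
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    # expose the bdev as a namespace of that subsystem
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    # accept NVMe/TCP connections on 10.0.0.3:4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

The destroy_subsystems teardown logged earlier is the inverse of this: nvmf_delete_subsystem followed by bdev_null_delete for each sub-index.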
10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
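The fio_plugin helper traced here runs ldd against the SPDK fio plugin to look for a linked ASAN runtime so that, on sanitized builds, the sanitizer library can be preloaded ahead of the plugin; in this run neither grep matches, so asan_lib stays empty. A rough sketch of that logic, using the same plugin path shown in the log:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
      # third ldd column is the resolved path of the sanitizer runtime, if it is linked in
      lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
      [[ -n "$lib" ]] && { asan_lib=$lib; break; }
    done
    # the sanitizer runtime (possibly empty) must come before the ioengine in LD_PRELOAD,
    # which is then passed to the fio invocation shown next in the log
    LD_PRELOAD="$asan_lib $plugin"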
00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:29.829 "params": { 00:20:29.829 "name": "Nvme0", 00:20:29.829 "trtype": "tcp", 00:20:29.829 "traddr": "10.0.0.3", 00:20:29.829 "adrfam": "ipv4", 00:20:29.829 "trsvcid": "4420", 00:20:29.829 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:29.829 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:29.829 "hdgst": false, 00:20:29.829 "ddgst": false 00:20:29.829 }, 00:20:29.829 "method": "bdev_nvme_attach_controller" 00:20:29.829 }' 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:29.829 10:38:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:29.829 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:29.829 ... 
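The printf output above is the resolved bdev_nvme_attach_controller entry emitted by gen_nvmf_target_json; fio consumes it through --spdk_json_conf and drives the target with the spdk_bdev ioengine instead of a kernel block device, so the initiator side of the test stays in user space. A standalone approximation of this invocation, assuming the fragment is wrapped in SPDK's usual subsystems/bdev/config JSON layout and using on-disk files in place of the /dev/fd process substitutions (bdev.json and job.fio are illustrative names):

    cat > bdev.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    JSON
    # job.fio stands in for the job file gen_fio_conf writes; for this invocation it is
    # randread, bs=128k, iodepth=3, numjobs=3, runtime=5 against the attached namespace bdev
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio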
00:20:29.829 fio-3.35 00:20:29.829 Starting 3 threads 00:20:36.438 00:20:36.438 filename0: (groupid=0, jobs=1): err= 0: pid=83515: Fri Nov 15 10:38:36 2024 00:20:36.438 read: IOPS=253, BW=31.7MiB/s (33.2MB/s)(159MiB/5008msec) 00:20:36.438 slat (nsec): min=5948, max=45913, avg=11192.21, stdev=4633.71 00:20:36.438 clat (usec): min=9589, max=12345, avg=11810.74, stdev=169.60 00:20:36.438 lat (usec): min=9597, max=12358, avg=11821.93, stdev=169.67 00:20:36.438 clat percentiles (usec): 00:20:36.438 | 1.00th=[11731], 5.00th=[11731], 10.00th=[11731], 20.00th=[11731], 00:20:36.438 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11731], 60.00th=[11731], 00:20:36.438 | 70.00th=[11863], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:20:36.438 | 99.00th=[12256], 99.50th=[12256], 99.90th=[12387], 99.95th=[12387], 00:20:36.438 | 99.99th=[12387] 00:20:36.438 bw ( KiB/s): min=32256, max=33024, per=33.33%, avg=32409.60, stdev=323.82, samples=10 00:20:36.438 iops : min= 252, max= 258, avg=253.20, stdev= 2.53, samples=10 00:20:36.438 lat (msec) : 10=0.24%, 20=99.76% 00:20:36.438 cpu : usr=90.87%, sys=8.51%, ctx=12, majf=0, minf=0 00:20:36.438 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:36.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.438 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.438 issued rwts: total=1269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.438 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:36.438 filename0: (groupid=0, jobs=1): err= 0: pid=83516: Fri Nov 15 10:38:36 2024 00:20:36.438 read: IOPS=253, BW=31.7MiB/s (33.2MB/s)(159MiB/5006msec) 00:20:36.438 slat (nsec): min=7944, max=38667, avg=14726.42, stdev=2795.20 00:20:36.438 clat (usec): min=5225, max=12508, avg=11799.85, stdev=342.19 00:20:36.438 lat (usec): min=5233, max=12534, avg=11814.58, stdev=342.41 00:20:36.438 clat percentiles (usec): 00:20:36.438 | 1.00th=[11731], 5.00th=[11731], 10.00th=[11731], 20.00th=[11731], 00:20:36.438 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11731], 60.00th=[11731], 00:20:36.438 | 70.00th=[11863], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:20:36.438 | 99.00th=[12256], 99.50th=[12256], 99.90th=[12518], 99.95th=[12518], 00:20:36.438 | 99.99th=[12518] 00:20:36.438 bw ( KiB/s): min=32256, max=33024, per=33.33%, avg=32409.60, stdev=323.82, samples=10 00:20:36.438 iops : min= 252, max= 258, avg=253.20, stdev= 2.53, samples=10 00:20:36.438 lat (msec) : 10=0.24%, 20=99.76% 00:20:36.438 cpu : usr=90.83%, sys=8.59%, ctx=90, majf=0, minf=0 00:20:36.438 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:36.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.438 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.438 issued rwts: total=1269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.438 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:36.438 filename0: (groupid=0, jobs=1): err= 0: pid=83517: Fri Nov 15 10:38:36 2024 00:20:36.438 read: IOPS=253, BW=31.6MiB/s (33.2MB/s)(158MiB/5001msec) 00:20:36.438 slat (nsec): min=7970, max=38302, avg=14536.68, stdev=2811.64 00:20:36.438 clat (usec): min=11678, max=12645, avg=11816.45, stdev=124.25 00:20:36.438 lat (usec): min=11690, max=12674, avg=11830.99, stdev=124.27 00:20:36.438 clat percentiles (usec): 00:20:36.438 | 1.00th=[11731], 5.00th=[11731], 10.00th=[11731], 20.00th=[11731], 00:20:36.438 | 30.00th=[11731], 
40.00th=[11731], 50.00th=[11731], 60.00th=[11731], 00:20:36.438 | 70.00th=[11863], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:20:36.438 | 99.00th=[12256], 99.50th=[12256], 99.90th=[12649], 99.95th=[12649], 00:20:36.438 | 99.99th=[12649] 00:20:36.438 bw ( KiB/s): min=32256, max=33024, per=33.35%, avg=32426.67, stdev=338.66, samples=9 00:20:36.438 iops : min= 252, max= 258, avg=253.33, stdev= 2.65, samples=9 00:20:36.439 lat (msec) : 20=100.00% 00:20:36.439 cpu : usr=91.20%, sys=8.16%, ctx=81, majf=0, minf=0 00:20:36.439 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:36.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.439 issued rwts: total=1266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.439 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:36.439 00:20:36.439 Run status group 0 (all jobs): 00:20:36.439 READ: bw=94.9MiB/s (99.6MB/s), 31.6MiB/s-31.7MiB/s (33.2MB/s-33.2MB/s), io=476MiB (499MB), run=5001-5008msec 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:36.439 
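As a sanity check on the 3-thread summary above: the group READ bandwidth of 94.9 MiB/s is simply the sum of the three per-job rates (31.7 + 31.7 + 31.6 MiB/s, with rounding), and the 476 MiB of total I/O matches the issued read counts, (1269 + 1269 + 1266) reads x 128 KiB, which is about 475.5 MiB. The test then switches to the second parameter set visible here (NULL_DIF=2, bs=4k, numjobs=8, iodepth=16, files=2, giving subsystems 0 1 2), building three DIF type-2 null bdevs across cnode0 through cnode2 before the 24-thread run that follows.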
10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.439 bdev_null0 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.439 [2024-11-15 10:38:36.475176] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.439 bdev_null1 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.439 10:38:36 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.439 bdev_null2 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:36.439 { 00:20:36.439 "params": { 00:20:36.439 "name": "Nvme$subsystem", 00:20:36.439 "trtype": "$TEST_TRANSPORT", 
00:20:36.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.439 "adrfam": "ipv4", 00:20:36.439 "trsvcid": "$NVMF_PORT", 00:20:36.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.439 "hdgst": ${hdgst:-false}, 00:20:36.439 "ddgst": ${ddgst:-false} 00:20:36.439 }, 00:20:36.439 "method": "bdev_nvme_attach_controller" 00:20:36.439 } 00:20:36.439 EOF 00:20:36.439 )") 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:20:36.439 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:36.440 { 00:20:36.440 "params": { 00:20:36.440 "name": "Nvme$subsystem", 00:20:36.440 "trtype": "$TEST_TRANSPORT", 00:20:36.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.440 "adrfam": "ipv4", 00:20:36.440 "trsvcid": "$NVMF_PORT", 00:20:36.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.440 "hdgst": ${hdgst:-false}, 00:20:36.440 "ddgst": ${ddgst:-false} 00:20:36.440 }, 00:20:36.440 "method": "bdev_nvme_attach_controller" 00:20:36.440 } 00:20:36.440 EOF 00:20:36.440 )") 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 
00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:36.440 { 00:20:36.440 "params": { 00:20:36.440 "name": "Nvme$subsystem", 00:20:36.440 "trtype": "$TEST_TRANSPORT", 00:20:36.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.440 "adrfam": "ipv4", 00:20:36.440 "trsvcid": "$NVMF_PORT", 00:20:36.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.440 "hdgst": ${hdgst:-false}, 00:20:36.440 "ddgst": ${ddgst:-false} 00:20:36.440 }, 00:20:36.440 "method": "bdev_nvme_attach_controller" 00:20:36.440 } 00:20:36.440 EOF 00:20:36.440 )") 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:36.440 "params": { 00:20:36.440 "name": "Nvme0", 00:20:36.440 "trtype": "tcp", 00:20:36.440 "traddr": "10.0.0.3", 00:20:36.440 "adrfam": "ipv4", 00:20:36.440 "trsvcid": "4420", 00:20:36.440 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:36.440 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:36.440 "hdgst": false, 00:20:36.440 "ddgst": false 00:20:36.440 }, 00:20:36.440 "method": "bdev_nvme_attach_controller" 00:20:36.440 },{ 00:20:36.440 "params": { 00:20:36.440 "name": "Nvme1", 00:20:36.440 "trtype": "tcp", 00:20:36.440 "traddr": "10.0.0.3", 00:20:36.440 "adrfam": "ipv4", 00:20:36.440 "trsvcid": "4420", 00:20:36.440 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.440 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.440 "hdgst": false, 00:20:36.440 "ddgst": false 00:20:36.440 }, 00:20:36.440 "method": "bdev_nvme_attach_controller" 00:20:36.440 },{ 00:20:36.440 "params": { 00:20:36.440 "name": "Nvme2", 00:20:36.440 "trtype": "tcp", 00:20:36.440 "traddr": "10.0.0.3", 00:20:36.440 "adrfam": "ipv4", 00:20:36.440 "trsvcid": "4420", 00:20:36.440 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:36.440 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:36.440 "hdgst": false, 00:20:36.440 "ddgst": false 00:20:36.440 }, 00:20:36.440 "method": "bdev_nvme_attach_controller" 00:20:36.440 }' 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:36.440 10:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:36.440 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:36.440 ... 00:20:36.440 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:36.440 ... 00:20:36.440 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:36.440 ... 00:20:36.440 fio-3.35 00:20:36.440 Starting 24 threads 00:20:48.665 00:20:48.665 filename0: (groupid=0, jobs=1): err= 0: pid=83613: Fri Nov 15 10:38:47 2024 00:20:48.665 read: IOPS=207, BW=828KiB/s (848kB/s)(8288KiB/10007msec) 00:20:48.665 slat (usec): min=4, max=8024, avg=21.98, stdev=215.59 00:20:48.665 clat (msec): min=9, max=133, avg=77.15, stdev=22.91 00:20:48.665 lat (msec): min=9, max=133, avg=77.17, stdev=22.90 00:20:48.665 clat percentiles (msec): 00:20:48.665 | 1.00th=[ 28], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 58], 00:20:48.665 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 80], 00:20:48.665 | 70.00th=[ 85], 80.00th=[ 100], 90.00th=[ 112], 95.00th=[ 121], 00:20:48.665 | 99.00th=[ 125], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 134], 00:20:48.665 | 99.99th=[ 134] 00:20:48.665 bw ( KiB/s): min= 616, max= 1024, per=4.22%, avg=814.74, stdev=129.31, samples=19 00:20:48.665 iops : min= 154, max= 256, avg=203.68, stdev=32.33, samples=19 00:20:48.665 lat (msec) : 10=0.29%, 20=0.14%, 50=13.47%, 100=66.80%, 250=19.31% 00:20:48.665 cpu : usr=36.88%, sys=2.40%, ctx=1112, majf=0, minf=9 00:20:48.665 IO depths : 1=0.1%, 2=1.1%, 4=4.4%, 8=79.3%, 16=15.2%, 32=0.0%, >=64=0.0% 00:20:48.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.665 complete : 0=0.0%, 4=87.9%, 8=11.1%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.665 issued rwts: total=2072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:48.665 filename0: (groupid=0, jobs=1): err= 0: pid=83614: Fri Nov 15 10:38:47 2024 00:20:48.665 read: IOPS=198, BW=793KiB/s (812kB/s)(7944KiB/10016msec) 00:20:48.665 slat (usec): min=6, max=12032, avg=33.35, stdev=365.33 00:20:48.665 clat (msec): min=23, max=142, avg=80.51, stdev=21.30 00:20:48.665 lat (msec): min=23, max=142, avg=80.54, stdev=21.30 00:20:48.665 clat percentiles (msec): 00:20:48.665 | 1.00th=[ 40], 5.00th=[ 49], 10.00th=[ 55], 20.00th=[ 65], 00:20:48.665 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 81], 00:20:48.665 | 70.00th=[ 86], 80.00th=[ 105], 90.00th=[ 114], 95.00th=[ 120], 00:20:48.665 | 99.00th=[ 125], 99.50th=[ 128], 99.90th=[ 142], 99.95th=[ 142], 00:20:48.665 | 99.99th=[ 142] 00:20:48.665 bw ( KiB/s): min= 640, max= 920, per=4.05%, avg=782.37, stdev=102.50, samples=19 00:20:48.665 iops : min= 160, max= 230, avg=195.58, stdev=25.63, samples=19 00:20:48.665 lat (msec) : 50=6.19%, 100=69.79%, 250=24.02% 00:20:48.665 cpu : usr=42.06%, sys=2.39%, ctx=1319, majf=0, minf=9 00:20:48.665 IO depths : 1=0.1%, 2=2.2%, 4=8.8%, 8=74.3%, 16=14.8%, 32=0.0%, >=64=0.0% 00:20:48.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.665 complete : 0=0.0%, 4=89.5%, 8=8.6%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.665 issued rwts: total=1986,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:20:48.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:48.665 filename0: (groupid=0, jobs=1): err= 0: pid=83615: Fri Nov 15 10:38:47 2024 00:20:48.665 read: IOPS=183, BW=736KiB/s (754kB/s)(7380KiB/10029msec) 00:20:48.665 slat (usec): min=6, max=8023, avg=26.43, stdev=251.26 00:20:48.665 clat (msec): min=37, max=163, avg=86.73, stdev=24.87 00:20:48.665 lat (msec): min=37, max=163, avg=86.76, stdev=24.87 00:20:48.665 clat percentiles (msec): 00:20:48.665 | 1.00th=[ 44], 5.00th=[ 51], 10.00th=[ 60], 20.00th=[ 67], 00:20:48.665 | 30.00th=[ 71], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 87], 00:20:48.665 | 70.00th=[ 104], 80.00th=[ 114], 90.00th=[ 121], 95.00th=[ 125], 00:20:48.665 | 99.00th=[ 153], 99.50th=[ 153], 99.90th=[ 165], 99.95th=[ 165], 00:20:48.665 | 99.99th=[ 165] 00:20:48.665 bw ( KiB/s): min= 507, max= 1024, per=3.79%, avg=731.25, stdev=167.38, samples=20 00:20:48.665 iops : min= 126, max= 256, avg=182.75, stdev=41.87, samples=20 00:20:48.665 lat (msec) : 50=4.17%, 100=64.23%, 250=31.60% 00:20:48.665 cpu : usr=41.67%, sys=2.80%, ctx=1522, majf=0, minf=9 00:20:48.665 IO depths : 1=0.1%, 2=4.3%, 4=17.5%, 8=64.7%, 16=13.5%, 32=0.0%, >=64=0.0% 00:20:48.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.665 complete : 0=0.0%, 4=92.0%, 8=4.1%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.665 issued rwts: total=1845,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:48.665 filename0: (groupid=0, jobs=1): err= 0: pid=83616: Fri Nov 15 10:38:47 2024 00:20:48.665 read: IOPS=214, BW=858KiB/s (878kB/s)(8644KiB/10080msec) 00:20:48.665 slat (usec): min=4, max=7022, avg=23.76, stdev=229.27 00:20:48.665 clat (usec): min=1548, max=152957, avg=74337.55, stdev=31985.15 00:20:48.665 lat (usec): min=1558, max=152971, avg=74361.31, stdev=31991.03 00:20:48.665 clat percentiles (usec): 00:20:48.665 | 1.00th=[ 1598], 5.00th=[ 1827], 10.00th=[ 13304], 20.00th=[ 55837], 00:20:48.665 | 30.00th=[ 67634], 40.00th=[ 70779], 50.00th=[ 74974], 60.00th=[ 80217], 00:20:48.665 | 70.00th=[ 89654], 80.00th=[106431], 90.00th=[114820], 95.00th=[120062], 00:20:48.665 | 99.00th=[125305], 99.50th=[133694], 99.90th=[143655], 99.95th=[145753], 00:20:48.665 | 99.99th=[152044] 00:20:48.665 bw ( KiB/s): min= 568, max= 2671, per=4.44%, avg=857.15, stdev=445.56, samples=20 00:20:48.665 iops : min= 142, max= 667, avg=214.25, stdev=111.23, samples=20 00:20:48.665 lat (msec) : 2=5.18%, 4=1.85%, 10=1.94%, 20=2.04%, 50=5.00% 00:20:48.665 lat (msec) : 100=59.93%, 250=24.06% 00:20:48.665 cpu : usr=42.75%, sys=2.60%, ctx=1749, majf=0, minf=3 00:20:48.665 IO depths : 1=0.5%, 2=2.7%, 4=9.0%, 8=73.0%, 16=14.8%, 32=0.0%, >=64=0.0% 00:20:48.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.666 complete : 0=0.0%, 4=89.9%, 8=8.1%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.666 issued rwts: total=2161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.666 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:48.666 filename0: (groupid=0, jobs=1): err= 0: pid=83617: Fri Nov 15 10:38:47 2024 00:20:48.666 read: IOPS=206, BW=825KiB/s (845kB/s)(8288KiB/10047msec) 00:20:48.666 slat (usec): min=8, max=8042, avg=22.87, stdev=249.09 00:20:48.666 clat (msec): min=11, max=155, avg=77.35, stdev=24.98 00:20:48.666 lat (msec): min=11, max=155, avg=77.37, stdev=24.98 00:20:48.666 clat percentiles (msec): 00:20:48.666 | 1.00th=[ 18], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 
58], 00:20:48.666 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 00:20:48.666 | 70.00th=[ 85], 80.00th=[ 108], 90.00th=[ 118], 95.00th=[ 121], 00:20:48.666 | 99.00th=[ 123], 99.50th=[ 131], 99.90th=[ 144], 99.95th=[ 144], 00:20:48.666 | 99.99th=[ 157] 00:20:48.666 bw ( KiB/s): min= 584, max= 1320, per=4.27%, avg=824.80, stdev=189.38, samples=20 00:20:48.666 iops : min= 146, max= 330, avg=206.20, stdev=47.35, samples=20 00:20:48.666 lat (msec) : 20=1.45%, 50=14.09%, 100=62.07%, 250=22.39% 00:20:48.666 cpu : usr=33.89%, sys=2.33%, ctx=974, majf=0, minf=9 00:20:48.666 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.0%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:48.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.666 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.666 issued rwts: total=2072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.666 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:48.666 filename0: (groupid=0, jobs=1): err= 0: pid=83618: Fri Nov 15 10:38:47 2024 00:20:48.666 read: IOPS=204, BW=818KiB/s (838kB/s)(8204KiB/10028msec) 00:20:48.666 slat (nsec): min=4949, max=43532, avg=14832.79, stdev=4680.11 00:20:48.666 clat (msec): min=23, max=143, avg=78.09, stdev=23.03 00:20:48.666 lat (msec): min=23, max=143, avg=78.10, stdev=23.03 00:20:48.666 clat percentiles (msec): 00:20:48.666 | 1.00th=[ 33], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 61], 00:20:48.666 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 78], 00:20:48.666 | 70.00th=[ 85], 80.00th=[ 108], 90.00th=[ 117], 95.00th=[ 121], 00:20:48.666 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 132], 99.95th=[ 134], 00:20:48.666 | 99.99th=[ 144] 00:20:48.666 bw ( KiB/s): min= 608, max= 1080, per=4.23%, avg=816.35, stdev=147.22, samples=20 00:20:48.666 iops : min= 152, max= 270, avg=204.00, stdev=36.82, samples=20 00:20:48.666 lat (msec) : 50=11.95%, 100=66.16%, 250=21.89% 00:20:48.666 cpu : usr=31.76%, sys=2.09%, ctx=874, majf=0, minf=9 00:20:48.666 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.2%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:48.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.666 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.666 issued rwts: total=2051,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.666 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:48.666 filename0: (groupid=0, jobs=1): err= 0: pid=83619: Fri Nov 15 10:38:47 2024 00:20:48.666 read: IOPS=204, BW=819KiB/s (839kB/s)(8248KiB/10070msec) 00:20:48.666 slat (usec): min=4, max=4028, avg=16.24, stdev=88.58 00:20:48.666 clat (msec): min=4, max=158, avg=77.90, stdev=26.05 00:20:48.666 lat (msec): min=4, max=158, avg=77.92, stdev=26.05 00:20:48.666 clat percentiles (msec): 00:20:48.666 | 1.00th=[ 7], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 61], 00:20:48.666 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 81], 00:20:48.666 | 70.00th=[ 85], 80.00th=[ 107], 90.00th=[ 115], 95.00th=[ 121], 00:20:48.666 | 99.00th=[ 126], 99.50th=[ 128], 99.90th=[ 144], 99.95th=[ 148], 00:20:48.666 | 99.99th=[ 159] 00:20:48.666 bw ( KiB/s): min= 608, max= 1532, per=4.24%, avg=818.20, stdev=211.06, samples=20 00:20:48.666 iops : min= 152, max= 383, avg=204.55, stdev=52.77, samples=20 00:20:48.666 lat (msec) : 10=1.65%, 20=2.91%, 50=7.57%, 100=64.31%, 250=23.57% 00:20:48.666 cpu : usr=42.76%, sys=2.63%, ctx=1330, majf=0, minf=9 00:20:48.666 IO depths : 1=0.1%, 2=1.3%, 4=5.2%, 8=77.6%, 16=15.8%, 32=0.0%, >=64=0.0% 
00:20:48.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.666 complete : 0=0.0%, 4=88.9%, 8=10.0%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.666 issued rwts: total=2062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.666 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:48.666 filename0: (groupid=0, jobs=1): err= 0: pid=83620: Fri Nov 15 10:38:47 2024 00:20:48.666 read: IOPS=204, BW=819KiB/s (838kB/s)(8196KiB/10012msec) 00:20:48.666 slat (usec): min=5, max=8034, avg=20.16, stdev=198.13 00:20:48.666 clat (msec): min=23, max=141, avg=78.09, stdev=22.52 00:20:48.666 lat (msec): min=23, max=141, avg=78.11, stdev=22.52 00:20:48.666 clat percentiles (msec): 00:20:48.666 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 59], 00:20:48.666 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 81], 00:20:48.666 | 70.00th=[ 85], 80.00th=[ 102], 90.00th=[ 112], 95.00th=[ 121], 00:20:48.666 | 99.00th=[ 125], 99.50th=[ 132], 99.90th=[ 136], 99.95th=[ 142], 00:20:48.666 | 99.99th=[ 142] 00:20:48.666 bw ( KiB/s): min= 616, max= 1024, per=4.19%, avg=808.84, stdev=126.46, samples=19 00:20:48.666 iops : min= 154, max= 256, avg=202.21, stdev=31.62, samples=19 00:20:48.666 lat (msec) : 50=10.79%, 100=67.89%, 250=21.33% 00:20:48.666 cpu : usr=37.51%, sys=2.29%, ctx=1276, majf=0, minf=9 00:20:48.666 IO depths : 1=0.1%, 2=1.1%, 4=4.4%, 8=79.1%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:48.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.666 complete : 0=0.0%, 4=88.1%, 8=10.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.666 issued rwts: total=2049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.666 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:48.666 filename1: (groupid=0, jobs=1): err= 0: pid=83621: Fri Nov 15 10:38:47 2024 00:20:48.666 read: IOPS=195, BW=781KiB/s (800kB/s)(7836KiB/10029msec) 00:20:48.666 slat (usec): min=4, max=8035, avg=31.75, stdev=361.92 00:20:48.666 clat (msec): min=36, max=140, avg=81.64, stdev=21.58 00:20:48.666 lat (msec): min=36, max=140, avg=81.67, stdev=21.56 00:20:48.666 clat percentiles (msec): 00:20:48.666 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 64], 00:20:48.666 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 00:20:48.666 | 70.00th=[ 91], 80.00th=[ 108], 90.00th=[ 117], 95.00th=[ 120], 00:20:48.666 | 99.00th=[ 121], 99.50th=[ 126], 99.90th=[ 128], 99.95th=[ 140], 00:20:48.666 | 99.99th=[ 140] 00:20:48.666 bw ( KiB/s): min= 640, max= 1024, per=4.04%, avg=779.40, stdev=113.13, samples=20 00:20:48.666 iops : min= 160, max= 256, avg=194.80, stdev=28.26, samples=20 00:20:48.666 lat (msec) : 50=8.37%, 100=67.89%, 250=23.74% 00:20:48.666 cpu : usr=33.54%, sys=1.83%, ctx=1001, majf=0, minf=9 00:20:48.666 IO depths : 1=0.1%, 2=1.5%, 4=6.0%, 8=77.2%, 16=15.2%, 32=0.0%, >=64=0.0% 00:20:48.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.666 complete : 0=0.0%, 4=88.7%, 8=10.0%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.666 issued rwts: total=1959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.666 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:48.666 filename1: (groupid=0, jobs=1): err= 0: pid=83622: Fri Nov 15 10:38:47 2024 00:20:48.666 read: IOPS=204, BW=820KiB/s (839kB/s)(8244KiB/10059msec) 00:20:48.666 slat (usec): min=4, max=8033, avg=22.14, stdev=216.47 00:20:48.666 clat (msec): min=6, max=153, avg=77.83, stdev=25.39 00:20:48.666 lat (msec): min=6, max=153, avg=77.86, stdev=25.39 00:20:48.666 clat 
percentiles (msec): 00:20:48.666 | 1.00th=[ 9], 5.00th=[ 45], 10.00th=[ 49], 20.00th=[ 59], 00:20:48.666 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 81], 00:20:48.666 | 70.00th=[ 87], 80.00th=[ 106], 90.00th=[ 116], 95.00th=[ 121], 00:20:48.666 | 99.00th=[ 126], 99.50th=[ 128], 99.90th=[ 144], 99.95th=[ 153], 00:20:48.666 | 99.99th=[ 155] 00:20:48.666 bw ( KiB/s): min= 584, max= 1520, per=4.23%, avg=817.90, stdev=208.84, samples=20 00:20:48.666 iops : min= 146, max= 380, avg=204.45, stdev=52.20, samples=20 00:20:48.666 lat (msec) : 10=1.65%, 20=1.46%, 50=9.56%, 100=64.53%, 250=22.80% 00:20:48.666 cpu : usr=40.44%, sys=2.66%, ctx=1172, majf=0, minf=9 00:20:48.666 IO depths : 1=0.1%, 2=1.4%, 4=5.6%, 8=77.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:48.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.666 complete : 0=0.0%, 4=88.8%, 8=10.0%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.666 issued rwts: total=2061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.666 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:48.666 filename1: (groupid=0, jobs=1): err= 0: pid=83623: Fri Nov 15 10:38:47 2024 00:20:48.666 read: IOPS=195, BW=784KiB/s (803kB/s)(7876KiB/10049msec) 00:20:48.666 slat (usec): min=4, max=8027, avg=27.12, stdev=312.52 00:20:48.666 clat (msec): min=14, max=145, avg=81.45, stdev=24.21 00:20:48.666 lat (msec): min=14, max=145, avg=81.47, stdev=24.22 00:20:48.666 clat percentiles (msec): 00:20:48.666 | 1.00th=[ 18], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 63], 00:20:48.666 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:20:48.666 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 120], 95.00th=[ 121], 00:20:48.666 | 99.00th=[ 131], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 146], 00:20:48.666 | 99.99th=[ 146] 00:20:48.666 bw ( KiB/s): min= 584, max= 1264, per=4.05%, avg=781.20, stdev=167.23, samples=20 00:20:48.666 iops : min= 146, max= 316, avg=195.30, stdev=41.81, samples=20 00:20:48.666 lat (msec) : 20=1.63%, 50=8.02%, 100=65.16%, 250=25.19% 00:20:48.666 cpu : usr=31.25%, sys=1.97%, ctx=851, majf=0, minf=9 00:20:48.666 IO depths : 1=0.1%, 2=2.2%, 4=8.5%, 8=74.1%, 16=15.1%, 32=0.0%, >=64=0.0% 00:20:48.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.666 complete : 0=0.0%, 4=89.7%, 8=8.5%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.666 issued rwts: total=1969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.666 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:48.666 filename1: (groupid=0, jobs=1): err= 0: pid=83624: Fri Nov 15 10:38:47 2024 00:20:48.666 read: IOPS=204, BW=819KiB/s (839kB/s)(8196KiB/10007msec) 00:20:48.666 slat (usec): min=5, max=8032, avg=30.80, stdev=287.61 00:20:48.666 clat (msec): min=10, max=159, avg=77.99, stdev=23.29 00:20:48.666 lat (msec): min=10, max=159, avg=78.02, stdev=23.28 00:20:48.666 clat percentiles (msec): 00:20:48.666 | 1.00th=[ 33], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 61], 00:20:48.666 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 79], 00:20:48.666 | 70.00th=[ 84], 80.00th=[ 105], 90.00th=[ 113], 95.00th=[ 120], 00:20:48.666 | 99.00th=[ 128], 99.50th=[ 148], 99.90th=[ 148], 99.95th=[ 161], 00:20:48.666 | 99.99th=[ 161] 00:20:48.667 bw ( KiB/s): min= 616, max= 1024, per=4.18%, avg=806.32, stdev=126.14, samples=19 00:20:48.667 iops : min= 154, max= 256, avg=201.58, stdev=31.54, samples=19 00:20:48.667 lat (msec) : 20=0.15%, 50=11.13%, 100=66.81%, 250=21.91% 00:20:48.667 cpu : usr=38.00%, sys=2.65%, ctx=1369, majf=0, minf=9 
00:20:48.667 IO depths : 1=0.1%, 2=1.3%, 4=5.4%, 8=78.2%, 16=15.0%, 32=0.0%, >=64=0.0% 00:20:48.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.667 complete : 0=0.0%, 4=88.2%, 8=10.6%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.667 issued rwts: total=2049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.667 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:48.667 filename1: (groupid=0, jobs=1): err= 0: pid=83625: Fri Nov 15 10:38:47 2024 00:20:48.667 read: IOPS=206, BW=828KiB/s (847kB/s)(8288KiB/10015msec) 00:20:48.667 slat (usec): min=5, max=4084, avg=19.87, stdev=142.00 00:20:48.667 clat (msec): min=16, max=148, avg=77.21, stdev=22.76 00:20:48.667 lat (msec): min=16, max=148, avg=77.23, stdev=22.76 00:20:48.667 clat percentiles (msec): 00:20:48.667 | 1.00th=[ 34], 5.00th=[ 46], 10.00th=[ 49], 20.00th=[ 58], 00:20:48.667 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 79], 00:20:48.667 | 70.00th=[ 84], 80.00th=[ 101], 90.00th=[ 114], 95.00th=[ 120], 00:20:48.667 | 99.00th=[ 126], 99.50th=[ 127], 99.90th=[ 131], 99.95th=[ 148], 00:20:48.667 | 99.99th=[ 148] 00:20:48.667 bw ( KiB/s): min= 608, max= 1024, per=4.23%, avg=817.26, stdev=132.59, samples=19 00:20:48.667 iops : min= 152, max= 256, avg=204.32, stdev=33.15, samples=19 00:20:48.667 lat (msec) : 20=0.48%, 50=12.07%, 100=67.47%, 250=19.98% 00:20:48.667 cpu : usr=35.16%, sys=2.12%, ctx=1110, majf=0, minf=9 00:20:48.667 IO depths : 1=0.1%, 2=1.2%, 4=4.8%, 8=79.0%, 16=15.0%, 32=0.0%, >=64=0.0% 00:20:48.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.667 complete : 0=0.0%, 4=88.0%, 8=11.0%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.667 issued rwts: total=2072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.667 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:48.667 filename1: (groupid=0, jobs=1): err= 0: pid=83626: Fri Nov 15 10:38:47 2024 00:20:48.667 read: IOPS=203, BW=815KiB/s (835kB/s)(8172KiB/10022msec) 00:20:48.667 slat (usec): min=4, max=9031, avg=22.82, stdev=252.86 00:20:48.667 clat (msec): min=32, max=133, avg=78.33, stdev=21.72 00:20:48.667 lat (msec): min=32, max=133, avg=78.35, stdev=21.72 00:20:48.667 clat percentiles (msec): 00:20:48.667 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 62], 00:20:48.667 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 80], 00:20:48.667 | 70.00th=[ 85], 80.00th=[ 104], 90.00th=[ 114], 95.00th=[ 120], 00:20:48.667 | 99.00th=[ 124], 99.50th=[ 126], 99.90th=[ 129], 99.95th=[ 133], 00:20:48.667 | 99.99th=[ 133] 00:20:48.667 bw ( KiB/s): min= 664, max= 1072, per=4.21%, avg=812.25, stdev=128.45, samples=20 00:20:48.667 iops : min= 166, max= 268, avg=203.05, stdev=32.10, samples=20 00:20:48.667 lat (msec) : 50=10.77%, 100=68.43%, 250=20.80% 00:20:48.667 cpu : usr=32.87%, sys=1.76%, ctx=1093, majf=0, minf=9 00:20:48.667 IO depths : 1=0.1%, 2=1.4%, 4=5.4%, 8=78.1%, 16=15.0%, 32=0.0%, >=64=0.0% 00:20:48.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.667 complete : 0=0.0%, 4=88.3%, 8=10.5%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.667 issued rwts: total=2043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.667 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:48.667 filename1: (groupid=0, jobs=1): err= 0: pid=83627: Fri Nov 15 10:38:47 2024 00:20:48.667 read: IOPS=189, BW=760KiB/s (778kB/s)(7612KiB/10021msec) 00:20:48.667 slat (usec): min=3, max=2484, avg=15.97, stdev=56.89 00:20:48.667 clat (msec): min=38, max=157, 
avg=84.10, stdev=22.16 00:20:48.667 lat (msec): min=38, max=157, avg=84.11, stdev=22.16 00:20:48.667 clat percentiles (msec): 00:20:48.667 | 1.00th=[ 46], 5.00th=[ 51], 10.00th=[ 62], 20.00th=[ 68], 00:20:48.667 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 78], 60.00th=[ 84], 00:20:48.667 | 70.00th=[ 94], 80.00th=[ 108], 90.00th=[ 117], 95.00th=[ 122], 00:20:48.667 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:20:48.667 | 99.99th=[ 157] 00:20:48.667 bw ( KiB/s): min= 512, max= 1024, per=3.92%, avg=756.40, stdev=125.91, samples=20 00:20:48.667 iops : min= 128, max= 256, avg=189.05, stdev=31.46, samples=20 00:20:48.667 lat (msec) : 50=4.99%, 100=67.95%, 250=27.06% 00:20:48.667 cpu : usr=40.04%, sys=2.23%, ctx=1273, majf=0, minf=9 00:20:48.667 IO depths : 1=0.1%, 2=3.3%, 4=13.0%, 8=69.7%, 16=14.0%, 32=0.0%, >=64=0.0% 00:20:48.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.667 complete : 0=0.0%, 4=90.6%, 8=6.6%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.667 issued rwts: total=1903,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.667 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:48.667 filename1: (groupid=0, jobs=1): err= 0: pid=83628: Fri Nov 15 10:38:47 2024 00:20:48.667 read: IOPS=199, BW=798KiB/s (817kB/s)(8008KiB/10040msec) 00:20:48.667 slat (nsec): min=5172, max=54157, avg=14283.81, stdev=4546.69 00:20:48.667 clat (msec): min=25, max=143, avg=80.07, stdev=22.23 00:20:48.667 lat (msec): min=25, max=143, avg=80.08, stdev=22.23 00:20:48.667 clat percentiles (msec): 00:20:48.667 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 61], 00:20:48.667 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:20:48.667 | 70.00th=[ 85], 80.00th=[ 108], 90.00th=[ 117], 95.00th=[ 121], 00:20:48.667 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 131], 99.95th=[ 131], 00:20:48.667 | 99.99th=[ 144] 00:20:48.667 bw ( KiB/s): min= 640, max= 1008, per=4.12%, avg=796.95, stdev=132.42, samples=20 00:20:48.667 iops : min= 160, max= 252, avg=199.20, stdev=33.14, samples=20 00:20:48.667 lat (msec) : 50=10.19%, 100=67.73%, 250=22.08% 00:20:48.667 cpu : usr=31.25%, sys=1.94%, ctx=847, majf=0, minf=9 00:20:48.667 IO depths : 1=0.1%, 2=1.0%, 4=4.1%, 8=79.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:48.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.667 complete : 0=0.0%, 4=88.4%, 8=10.7%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.667 issued rwts: total=2002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.667 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:48.667 filename2: (groupid=0, jobs=1): err= 0: pid=83629: Fri Nov 15 10:38:47 2024 00:20:48.667 read: IOPS=209, BW=837KiB/s (857kB/s)(8380KiB/10014msec) 00:20:48.667 slat (usec): min=5, max=4056, avg=18.26, stdev=91.77 00:20:48.667 clat (msec): min=16, max=142, avg=76.37, stdev=22.94 00:20:48.667 lat (msec): min=16, max=142, avg=76.39, stdev=22.94 00:20:48.667 clat percentiles (msec): 00:20:48.667 | 1.00th=[ 34], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 56], 00:20:48.667 | 30.00th=[ 65], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 78], 00:20:48.667 | 70.00th=[ 83], 80.00th=[ 101], 90.00th=[ 114], 95.00th=[ 120], 00:20:48.667 | 99.00th=[ 123], 99.50th=[ 125], 99.90th=[ 130], 99.95th=[ 142], 00:20:48.667 | 99.99th=[ 142] 00:20:48.667 bw ( KiB/s): min= 616, max= 1040, per=4.29%, avg=828.26, stdev=144.68, samples=19 00:20:48.667 iops : min= 154, max= 260, avg=207.05, stdev=36.17, samples=19 00:20:48.667 lat (msec) : 20=0.48%, 50=13.70%, 
100=66.01%, 250=19.81% 00:20:48.667 cpu : usr=40.92%, sys=2.50%, ctx=1350, majf=0, minf=9 00:20:48.667 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=80.4%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:48.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.667 complete : 0=0.0%, 4=87.7%, 8=11.6%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.667 issued rwts: total=2095,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.667 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:48.667 filename2: (groupid=0, jobs=1): err= 0: pid=83630: Fri Nov 15 10:38:47 2024 00:20:48.667 read: IOPS=177, BW=712KiB/s (729kB/s)(7160KiB/10059msec) 00:20:48.667 slat (usec): min=3, max=3231, avg=15.37, stdev=76.25 00:20:48.667 clat (msec): min=4, max=168, avg=89.63, stdev=31.17 00:20:48.667 lat (msec): min=4, max=168, avg=89.65, stdev=31.17 00:20:48.667 clat percentiles (msec): 00:20:48.667 | 1.00th=[ 8], 5.00th=[ 42], 10.00th=[ 61], 20.00th=[ 66], 00:20:48.667 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 86], 60.00th=[ 96], 00:20:48.667 | 70.00th=[ 108], 80.00th=[ 118], 90.00th=[ 121], 95.00th=[ 144], 00:20:48.667 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 169], 00:20:48.667 | 99.99th=[ 169] 00:20:48.667 bw ( KiB/s): min= 496, max= 1532, per=3.67%, avg=709.30, stdev=240.49, samples=20 00:20:48.667 iops : min= 124, max= 383, avg=177.30, stdev=60.12, samples=20 00:20:48.667 lat (msec) : 10=1.79%, 20=2.57%, 50=2.35%, 100=55.59%, 250=37.71% 00:20:48.667 cpu : usr=38.88%, sys=2.51%, ctx=1241, majf=0, minf=9 00:20:48.667 IO depths : 1=0.1%, 2=5.3%, 4=21.3%, 8=60.1%, 16=13.2%, 32=0.0%, >=64=0.0% 00:20:48.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.667 complete : 0=0.0%, 4=93.4%, 8=1.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.667 issued rwts: total=1790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.667 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:48.667 filename2: (groupid=0, jobs=1): err= 0: pid=83631: Fri Nov 15 10:38:47 2024 00:20:48.667 read: IOPS=209, BW=837KiB/s (857kB/s)(8408KiB/10047msec) 00:20:48.667 slat (usec): min=3, max=8045, avg=20.85, stdev=195.87 00:20:48.667 clat (msec): min=13, max=144, avg=76.24, stdev=24.83 00:20:48.667 lat (msec): min=13, max=144, avg=76.27, stdev=24.83 00:20:48.667 clat percentiles (msec): 00:20:48.667 | 1.00th=[ 16], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 58], 00:20:48.667 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 77], 00:20:48.667 | 70.00th=[ 84], 80.00th=[ 106], 90.00th=[ 114], 95.00th=[ 121], 00:20:48.667 | 99.00th=[ 125], 99.50th=[ 127], 99.90th=[ 134], 99.95th=[ 144], 00:20:48.667 | 99.99th=[ 144] 00:20:48.667 bw ( KiB/s): min= 616, max= 1264, per=4.33%, avg=836.80, stdev=179.27, samples=20 00:20:48.667 iops : min= 154, max= 316, avg=209.20, stdev=44.82, samples=20 00:20:48.667 lat (msec) : 20=2.19%, 50=14.94%, 100=61.80%, 250=21.08% 00:20:48.667 cpu : usr=33.83%, sys=2.00%, ctx=932, majf=0, minf=9 00:20:48.667 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=80.8%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:48.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.667 complete : 0=0.0%, 4=87.8%, 8=11.6%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.667 issued rwts: total=2102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.667 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:48.667 filename2: (groupid=0, jobs=1): err= 0: pid=83632: Fri Nov 15 10:38:47 2024 00:20:48.667 read: IOPS=208, BW=835KiB/s (855kB/s)(8372KiB/10032msec) 00:20:48.667 
slat (usec): min=4, max=12022, avg=20.40, stdev=262.52 00:20:48.668 clat (msec): min=22, max=155, avg=76.57, stdev=22.87 00:20:48.668 lat (msec): min=22, max=155, avg=76.59, stdev=22.87 00:20:48.668 clat percentiles (msec): 00:20:48.668 | 1.00th=[ 34], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 58], 00:20:48.668 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 77], 00:20:48.668 | 70.00th=[ 84], 80.00th=[ 102], 90.00th=[ 114], 95.00th=[ 120], 00:20:48.668 | 99.00th=[ 123], 99.50th=[ 127], 99.90th=[ 133], 99.95th=[ 133], 00:20:48.668 | 99.99th=[ 155] 00:20:48.668 bw ( KiB/s): min= 632, max= 1128, per=4.30%, avg=830.10, stdev=146.31, samples=20 00:20:48.668 iops : min= 158, max= 282, avg=207.50, stdev=36.61, samples=20 00:20:48.668 lat (msec) : 50=13.66%, 100=66.17%, 250=20.16% 00:20:48.668 cpu : usr=41.26%, sys=2.51%, ctx=1075, majf=0, minf=9 00:20:48.668 IO depths : 1=0.1%, 2=0.3%, 4=0.9%, 8=82.8%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:48.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.668 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.668 issued rwts: total=2093,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.668 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:48.668 filename2: (groupid=0, jobs=1): err= 0: pid=83633: Fri Nov 15 10:38:47 2024 00:20:48.668 read: IOPS=204, BW=816KiB/s (836kB/s)(8184KiB/10029msec) 00:20:48.668 slat (usec): min=5, max=8028, avg=35.48, stdev=354.87 00:20:48.668 clat (msec): min=35, max=143, avg=78.22, stdev=21.94 00:20:48.668 lat (msec): min=35, max=143, avg=78.26, stdev=21.93 00:20:48.668 clat percentiles (msec): 00:20:48.668 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 61], 00:20:48.668 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 80], 00:20:48.668 | 70.00th=[ 85], 80.00th=[ 105], 90.00th=[ 114], 95.00th=[ 120], 00:20:48.668 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 131], 99.95th=[ 131], 00:20:48.668 | 99.99th=[ 144] 00:20:48.668 bw ( KiB/s): min= 640, max= 998, per=4.20%, avg=811.60, stdev=125.64, samples=20 00:20:48.668 iops : min= 160, max= 249, avg=202.85, stdev=31.39, samples=20 00:20:48.668 lat (msec) : 50=11.19%, 100=67.74%, 250=21.07% 00:20:48.668 cpu : usr=36.95%, sys=2.48%, ctx=1048, majf=0, minf=9 00:20:48.668 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=80.1%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:48.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.668 complete : 0=0.0%, 4=88.0%, 8=11.3%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.668 issued rwts: total=2046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.668 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:48.668 filename2: (groupid=0, jobs=1): err= 0: pid=83634: Fri Nov 15 10:38:47 2024 00:20:48.668 read: IOPS=198, BW=795KiB/s (814kB/s)(7968KiB/10022msec) 00:20:48.668 slat (usec): min=4, max=8024, avg=19.31, stdev=179.53 00:20:48.668 clat (msec): min=22, max=149, avg=80.39, stdev=22.23 00:20:48.668 lat (msec): min=22, max=149, avg=80.41, stdev=22.23 00:20:48.668 clat percentiles (msec): 00:20:48.668 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 62], 00:20:48.668 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 82], 00:20:48.668 | 70.00th=[ 86], 80.00th=[ 108], 90.00th=[ 118], 95.00th=[ 121], 00:20:48.668 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 150], 99.95th=[ 150], 00:20:48.668 | 99.99th=[ 150] 00:20:48.668 bw ( KiB/s): min= 632, max= 1000, per=4.09%, avg=790.40, stdev=121.29, samples=20 00:20:48.668 iops : min= 158, max= 
250, avg=197.60, stdev=30.32, samples=20 00:20:48.668 lat (msec) : 50=9.84%, 100=66.87%, 250=23.29% 00:20:48.668 cpu : usr=31.24%, sys=2.10%, ctx=845, majf=0, minf=9 00:20:48.668 IO depths : 1=0.1%, 2=2.0%, 4=7.8%, 8=75.4%, 16=14.8%, 32=0.0%, >=64=0.0% 00:20:48.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.668 complete : 0=0.0%, 4=89.1%, 8=9.2%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.668 issued rwts: total=1992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.668 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:48.668 filename2: (groupid=0, jobs=1): err= 0: pid=83635: Fri Nov 15 10:38:47 2024 00:20:48.668 read: IOPS=204, BW=818KiB/s (838kB/s)(8188KiB/10011msec) 00:20:48.668 slat (usec): min=4, max=8025, avg=18.51, stdev=177.13 00:20:48.668 clat (msec): min=20, max=152, avg=78.16, stdev=22.52 00:20:48.668 lat (msec): min=20, max=152, avg=78.17, stdev=22.53 00:20:48.668 clat percentiles (msec): 00:20:48.668 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 61], 00:20:48.668 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 78], 00:20:48.668 | 70.00th=[ 83], 80.00th=[ 104], 90.00th=[ 115], 95.00th=[ 120], 00:20:48.668 | 99.00th=[ 129], 99.50th=[ 129], 99.90th=[ 142], 99.95th=[ 153], 00:20:48.668 | 99.99th=[ 153] 00:20:48.668 bw ( KiB/s): min= 616, max= 1000, per=4.19%, avg=808.05, stdev=123.72, samples=19 00:20:48.668 iops : min= 154, max= 250, avg=202.00, stdev=30.93, samples=19 00:20:48.668 lat (msec) : 50=10.50%, 100=67.42%, 250=22.08% 00:20:48.668 cpu : usr=31.71%, sys=2.11%, ctx=992, majf=0, minf=9 00:20:48.668 IO depths : 1=0.1%, 2=1.6%, 4=6.2%, 8=77.2%, 16=14.9%, 32=0.0%, >=64=0.0% 00:20:48.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.668 complete : 0=0.0%, 4=88.6%, 8=10.1%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.668 issued rwts: total=2047,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.668 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:48.668 filename2: (groupid=0, jobs=1): err= 0: pid=83636: Fri Nov 15 10:38:47 2024 00:20:48.668 read: IOPS=210, BW=842KiB/s (862kB/s)(8456KiB/10041msec) 00:20:48.668 slat (usec): min=5, max=8033, avg=29.96, stdev=324.90 00:20:48.668 clat (msec): min=16, max=145, avg=75.78, stdev=23.91 00:20:48.668 lat (msec): min=16, max=145, avg=75.81, stdev=23.91 00:20:48.668 clat percentiles (msec): 00:20:48.668 | 1.00th=[ 27], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 56], 00:20:48.668 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 78], 00:20:48.668 | 70.00th=[ 83], 80.00th=[ 105], 90.00th=[ 114], 95.00th=[ 120], 00:20:48.668 | 99.00th=[ 124], 99.50th=[ 124], 99.90th=[ 142], 99.95th=[ 142], 00:20:48.668 | 99.99th=[ 146] 00:20:48.668 bw ( KiB/s): min= 608, max= 1128, per=4.35%, avg=839.20, stdev=173.45, samples=20 00:20:48.668 iops : min= 152, max= 282, avg=209.80, stdev=43.36, samples=20 00:20:48.668 lat (msec) : 20=0.28%, 50=15.99%, 100=63.06%, 250=20.67% 00:20:48.668 cpu : usr=38.99%, sys=2.43%, ctx=1124, majf=0, minf=9 00:20:48.668 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:48.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.668 complete : 0=0.0%, 4=87.2%, 8=12.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.668 issued rwts: total=2114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.668 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:48.668 00:20:48.668 Run status group 0 (all jobs): 00:20:48.668 READ: bw=18.8MiB/s (19.8MB/s), 
712KiB/s-858KiB/s (729kB/s-878kB/s), io=190MiB (199MB), run=10007-10080msec 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.668 
10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:20:48.668 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.669 bdev_null0 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.669 [2024-11-15 10:38:47.939018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.669 bdev_null1 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.669 { 00:20:48.669 "params": { 00:20:48.669 "name": "Nvme$subsystem", 00:20:48.669 "trtype": "$TEST_TRANSPORT", 00:20:48.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.669 "adrfam": "ipv4", 00:20:48.669 "trsvcid": "$NVMF_PORT", 00:20:48.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.669 "hdgst": ${hdgst:-false}, 00:20:48.669 "ddgst": ${ddgst:-false} 00:20:48.669 }, 00:20:48.669 "method": "bdev_nvme_attach_controller" 00:20:48.669 } 00:20:48.669 EOF 00:20:48.669 )") 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.669 { 00:20:48.669 "params": { 00:20:48.669 "name": "Nvme$subsystem", 00:20:48.669 "trtype": "$TEST_TRANSPORT", 00:20:48.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.669 "adrfam": "ipv4", 00:20:48.669 "trsvcid": "$NVMF_PORT", 00:20:48.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.669 "hdgst": ${hdgst:-false}, 00:20:48.669 "ddgst": ${ddgst:-false} 00:20:48.669 }, 00:20:48.669 "method": "bdev_nvme_attach_controller" 00:20:48.669 } 00:20:48.669 EOF 00:20:48.669 )") 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:48.669 10:38:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:48.669 "params": { 00:20:48.669 "name": "Nvme0", 00:20:48.669 "trtype": "tcp", 00:20:48.669 "traddr": "10.0.0.3", 00:20:48.669 "adrfam": "ipv4", 00:20:48.669 "trsvcid": "4420", 00:20:48.669 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:48.669 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:48.669 "hdgst": false, 00:20:48.669 "ddgst": false 00:20:48.669 }, 00:20:48.669 "method": "bdev_nvme_attach_controller" 00:20:48.669 },{ 00:20:48.669 "params": { 00:20:48.669 "name": "Nvme1", 00:20:48.669 "trtype": "tcp", 00:20:48.669 "traddr": "10.0.0.3", 00:20:48.669 "adrfam": "ipv4", 00:20:48.669 "trsvcid": "4420", 00:20:48.669 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.669 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:48.669 "hdgst": false, 00:20:48.669 "ddgst": false 00:20:48.669 }, 00:20:48.669 "method": "bdev_nvme_attach_controller" 00:20:48.669 }' 00:20:48.669 10:38:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:48.669 10:38:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:48.669 10:38:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:48.669 10:38:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:48.669 10:38:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:48.669 10:38:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:48.669 10:38:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:48.669 10:38:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:48.669 10:38:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:48.669 10:38:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:48.669 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:48.669 ... 00:20:48.669 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:48.669 ... 
00:20:48.669 fio-3.35 00:20:48.669 Starting 4 threads 00:20:54.042 00:20:54.042 filename0: (groupid=0, jobs=1): err= 0: pid=83774: Fri Nov 15 10:38:53 2024 00:20:54.042 read: IOPS=2205, BW=17.2MiB/s (18.1MB/s)(86.2MiB/5002msec) 00:20:54.042 slat (nsec): min=7470, max=57100, avg=11572.73, stdev=3886.70 00:20:54.042 clat (usec): min=663, max=7538, avg=3593.85, stdev=1014.19 00:20:54.042 lat (usec): min=671, max=7552, avg=3605.42, stdev=1014.56 00:20:54.042 clat percentiles (usec): 00:20:54.042 | 1.00th=[ 1385], 5.00th=[ 1434], 10.00th=[ 1467], 20.00th=[ 2999], 00:20:54.042 | 30.00th=[ 3359], 40.00th=[ 3654], 50.00th=[ 3884], 60.00th=[ 3949], 00:20:54.042 | 70.00th=[ 4047], 80.00th=[ 4178], 90.00th=[ 4621], 95.00th=[ 5211], 00:20:54.042 | 99.00th=[ 5669], 99.50th=[ 5932], 99.90th=[ 6587], 99.95th=[ 7046], 00:20:54.042 | 99.99th=[ 7308] 00:20:54.042 bw ( KiB/s): min=15536, max=20384, per=26.96%, avg=17338.67, stdev=1827.87, samples=9 00:20:54.042 iops : min= 1942, max= 2548, avg=2167.33, stdev=228.48, samples=9 00:20:54.042 lat (usec) : 750=0.09%, 1000=0.07% 00:20:54.042 lat (msec) : 2=12.24%, 4=55.28%, 10=32.32% 00:20:54.042 cpu : usr=91.44%, sys=7.66%, ctx=8, majf=0, minf=0 00:20:54.042 IO depths : 1=0.1%, 2=6.3%, 4=62.4%, 8=31.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:54.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.042 complete : 0=0.0%, 4=97.6%, 8=2.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.042 issued rwts: total=11031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.042 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:54.042 filename0: (groupid=0, jobs=1): err= 0: pid=83775: Fri Nov 15 10:38:53 2024 00:20:54.042 read: IOPS=1967, BW=15.4MiB/s (16.1MB/s)(76.9MiB/5002msec) 00:20:54.042 slat (nsec): min=4990, max=58824, avg=15533.61, stdev=4022.07 00:20:54.042 clat (usec): min=1291, max=8035, avg=4014.32, stdev=673.90 00:20:54.042 lat (usec): min=1305, max=8049, avg=4029.85, stdev=674.05 00:20:54.042 clat percentiles (usec): 00:20:54.042 | 1.00th=[ 1647], 5.00th=[ 3032], 10.00th=[ 3326], 20.00th=[ 3490], 00:20:54.042 | 30.00th=[ 3884], 40.00th=[ 3949], 50.00th=[ 4015], 60.00th=[ 4228], 00:20:54.042 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4817], 95.00th=[ 5211], 00:20:54.042 | 99.00th=[ 5735], 99.50th=[ 5932], 99.90th=[ 7111], 99.95th=[ 7177], 00:20:54.042 | 99.99th=[ 8029] 00:20:54.042 bw ( KiB/s): min=14464, max=16624, per=24.46%, avg=15731.20, stdev=781.87, samples=10 00:20:54.042 iops : min= 1808, max= 2078, avg=1966.40, stdev=97.73, samples=10 00:20:54.042 lat (msec) : 2=1.86%, 4=47.25%, 10=50.89% 00:20:54.042 cpu : usr=92.06%, sys=7.04%, ctx=135, majf=0, minf=10 00:20:54.042 IO depths : 1=0.1%, 2=14.9%, 4=58.0%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:54.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.042 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.042 issued rwts: total=9840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.042 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:54.042 filename1: (groupid=0, jobs=1): err= 0: pid=83776: Fri Nov 15 10:38:53 2024 00:20:54.042 read: IOPS=1922, BW=15.0MiB/s (15.8MB/s)(75.1MiB/5002msec) 00:20:54.042 slat (usec): min=7, max=124, avg=15.33, stdev= 4.21 00:20:54.042 clat (usec): min=1292, max=8035, avg=4107.12, stdev=686.87 00:20:54.042 lat (usec): min=1305, max=8049, avg=4122.45, stdev=686.43 00:20:54.042 clat percentiles (usec): 00:20:54.042 | 1.00th=[ 1926], 5.00th=[ 3326], 10.00th=[ 3359], 
20.00th=[ 3752], 00:20:54.042 | 30.00th=[ 3884], 40.00th=[ 3949], 50.00th=[ 4146], 60.00th=[ 4228], 00:20:54.042 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4948], 95.00th=[ 5276], 00:20:54.042 | 99.00th=[ 6325], 99.50th=[ 6587], 99.90th=[ 7439], 99.95th=[ 7504], 00:20:54.042 | 99.99th=[ 8029] 00:20:54.042 bw ( KiB/s): min=13584, max=16624, per=23.91%, avg=15378.80, stdev=953.72, samples=10 00:20:54.042 iops : min= 1698, max= 2078, avg=1922.30, stdev=119.27, samples=10 00:20:54.042 lat (msec) : 2=1.04%, 4=44.84%, 10=54.12% 00:20:54.043 cpu : usr=91.70%, sys=7.22%, ctx=37, majf=0, minf=9 00:20:54.043 IO depths : 1=0.1%, 2=16.3%, 4=57.0%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:54.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.043 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.043 issued rwts: total=9618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.043 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:54.043 filename1: (groupid=0, jobs=1): err= 0: pid=83777: Fri Nov 15 10:38:53 2024 00:20:54.043 read: IOPS=1943, BW=15.2MiB/s (15.9MB/s)(75.9MiB/5001msec) 00:20:54.043 slat (nsec): min=7700, max=52944, avg=14796.06, stdev=3898.58 00:20:54.043 clat (usec): min=1015, max=8031, avg=4066.36, stdev=655.01 00:20:54.043 lat (usec): min=1028, max=8044, avg=4081.16, stdev=655.24 00:20:54.043 clat percentiles (usec): 00:20:54.043 | 1.00th=[ 2008], 5.00th=[ 3294], 10.00th=[ 3359], 20.00th=[ 3687], 00:20:54.043 | 30.00th=[ 3884], 40.00th=[ 3949], 50.00th=[ 4080], 60.00th=[ 4228], 00:20:54.043 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4817], 95.00th=[ 5211], 00:20:54.043 | 99.00th=[ 5997], 99.50th=[ 6194], 99.90th=[ 6587], 99.95th=[ 7373], 00:20:54.043 | 99.99th=[ 8029] 00:20:54.043 bw ( KiB/s): min=14592, max=16560, per=24.28%, avg=15612.44, stdev=672.13, samples=9 00:20:54.043 iops : min= 1824, max= 2070, avg=1951.56, stdev=84.02, samples=9 00:20:54.043 lat (msec) : 2=1.00%, 4=46.10%, 10=52.91% 00:20:54.043 cpu : usr=91.48%, sys=7.70%, ctx=11, majf=0, minf=0 00:20:54.043 IO depths : 1=0.1%, 2=15.7%, 4=57.5%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:54.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.043 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.043 issued rwts: total=9719,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.043 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:54.043 00:20:54.043 Run status group 0 (all jobs): 00:20:54.043 READ: bw=62.8MiB/s (65.8MB/s), 15.0MiB/s-17.2MiB/s (15.8MB/s-18.1MB/s), io=314MiB (329MB), run=5001-5002msec 00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.043 00:20:54.043 real 0m23.731s 00:20:54.043 user 2m3.366s 00:20:54.043 sys 0m9.251s 00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:54.043 10:38:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:54.043 ************************************ 00:20:54.043 END TEST fio_dif_rand_params 00:20:54.043 ************************************ 00:20:54.043 10:38:54 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:54.043 10:38:54 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:54.043 10:38:54 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:54.043 10:38:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:54.043 ************************************ 00:20:54.043 START TEST fio_dif_digest 00:20:54.043 ************************************ 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:20:54.043 10:38:54 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:54.043 bdev_null0 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:54.043 [2024-11-15 10:38:54.181548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.043 { 00:20:54.043 "params": { 00:20:54.043 "name": "Nvme$subsystem", 00:20:54.043 "trtype": "$TEST_TRANSPORT", 00:20:54.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.043 "adrfam": "ipv4", 00:20:54.043 "trsvcid": "$NVMF_PORT", 00:20:54.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.043 "hdgst": ${hdgst:-false}, 00:20:54.043 "ddgst": ${ddgst:-false} 00:20:54.043 }, 00:20:54.043 "method": "bdev_nvme_attach_controller" 00:20:54.043 } 00:20:54.043 EOF 00:20:54.043 )") 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:54.043 "params": { 00:20:54.043 "name": "Nvme0", 00:20:54.043 "trtype": "tcp", 00:20:54.043 "traddr": "10.0.0.3", 00:20:54.043 "adrfam": "ipv4", 00:20:54.043 "trsvcid": "4420", 00:20:54.043 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:54.043 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:54.043 "hdgst": true, 00:20:54.043 "ddgst": true 00:20:54.043 }, 00:20:54.043 "method": "bdev_nvme_attach_controller" 00:20:54.043 }' 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:54.043 10:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:54.043 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:54.043 ... 
00:20:54.043 fio-3.35 00:20:54.043 Starting 3 threads 00:21:06.256 00:21:06.256 filename0: (groupid=0, jobs=1): err= 0: pid=83883: Fri Nov 15 10:39:04 2024 00:21:06.256 read: IOPS=220, BW=27.6MiB/s (29.0MB/s)(276MiB/10005msec) 00:21:06.256 slat (usec): min=7, max=102, avg=11.53, stdev= 4.27 00:21:06.256 clat (usec): min=7200, max=15805, avg=13546.58, stdev=343.38 00:21:06.256 lat (usec): min=7208, max=15818, avg=13558.11, stdev=343.37 00:21:06.256 clat percentiles (usec): 00:21:06.256 | 1.00th=[13304], 5.00th=[13435], 10.00th=[13435], 20.00th=[13435], 00:21:06.256 | 30.00th=[13435], 40.00th=[13435], 50.00th=[13435], 60.00th=[13566], 00:21:06.256 | 70.00th=[13566], 80.00th=[13698], 90.00th=[13829], 95.00th=[13960], 00:21:06.256 | 99.00th=[14484], 99.50th=[15008], 99.90th=[15795], 99.95th=[15795], 00:21:06.256 | 99.99th=[15795] 00:21:06.256 bw ( KiB/s): min=27648, max=28416, per=33.29%, avg=28254.32, stdev=321.68, samples=19 00:21:06.256 iops : min= 216, max= 222, avg=220.74, stdev= 2.51, samples=19 00:21:06.256 lat (msec) : 10=0.14%, 20=99.86% 00:21:06.256 cpu : usr=91.33%, sys=7.83%, ctx=98, majf=0, minf=0 00:21:06.256 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:06.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.256 issued rwts: total=2211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.256 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:06.256 filename0: (groupid=0, jobs=1): err= 0: pid=83884: Fri Nov 15 10:39:04 2024 00:21:06.256 read: IOPS=221, BW=27.6MiB/s (29.0MB/s)(276MiB/10001msec) 00:21:06.256 slat (nsec): min=7742, max=44991, avg=11665.69, stdev=4860.03 00:21:06.256 clat (usec): min=6698, max=14985, avg=13539.46, stdev=327.96 00:21:06.256 lat (usec): min=6706, max=14998, avg=13551.13, stdev=327.80 00:21:06.256 clat percentiles (usec): 00:21:06.256 | 1.00th=[13304], 5.00th=[13435], 10.00th=[13435], 20.00th=[13435], 00:21:06.256 | 30.00th=[13435], 40.00th=[13435], 50.00th=[13435], 60.00th=[13566], 00:21:06.256 | 70.00th=[13566], 80.00th=[13698], 90.00th=[13829], 95.00th=[13960], 00:21:06.256 | 99.00th=[14353], 99.50th=[14615], 99.90th=[15008], 99.95th=[15008], 00:21:06.256 | 99.99th=[15008] 00:21:06.256 bw ( KiB/s): min=27648, max=28416, per=33.34%, avg=28291.74, stdev=286.68, samples=19 00:21:06.256 iops : min= 216, max= 222, avg=221.00, stdev= 2.24, samples=19 00:21:06.256 lat (msec) : 10=0.14%, 20=99.86% 00:21:06.256 cpu : usr=91.87%, sys=7.54%, ctx=13, majf=0, minf=0 00:21:06.256 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:06.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.256 issued rwts: total=2211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.256 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:06.256 filename0: (groupid=0, jobs=1): err= 0: pid=83885: Fri Nov 15 10:39:04 2024 00:21:06.256 read: IOPS=221, BW=27.6MiB/s (29.0MB/s)(276MiB/10004msec) 00:21:06.256 slat (nsec): min=7818, max=42159, avg=12134.64, stdev=4610.00 00:21:06.256 clat (usec): min=9439, max=15548, avg=13543.64, stdev=276.05 00:21:06.256 lat (usec): min=9448, max=15562, avg=13555.78, stdev=276.03 00:21:06.256 clat percentiles (usec): 00:21:06.256 | 1.00th=[13304], 5.00th=[13435], 10.00th=[13435], 20.00th=[13435], 00:21:06.256 | 30.00th=[13435], 40.00th=[13435], 
50.00th=[13435], 60.00th=[13435], 00:21:06.256 | 70.00th=[13566], 80.00th=[13698], 90.00th=[13829], 95.00th=[13960], 00:21:06.256 | 99.00th=[14484], 99.50th=[14746], 99.90th=[15533], 99.95th=[15533], 00:21:06.256 | 99.99th=[15533] 00:21:06.256 bw ( KiB/s): min=27648, max=28416, per=33.34%, avg=28294.74, stdev=287.72, samples=19 00:21:06.256 iops : min= 216, max= 222, avg=221.05, stdev= 2.25, samples=19 00:21:06.256 lat (msec) : 10=0.14%, 20=99.86% 00:21:06.256 cpu : usr=91.65%, sys=7.75%, ctx=33, majf=0, minf=0 00:21:06.256 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:06.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.256 issued rwts: total=2211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.256 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:06.256 00:21:06.256 Run status group 0 (all jobs): 00:21:06.256 READ: bw=82.9MiB/s (86.9MB/s), 27.6MiB/s-27.6MiB/s (29.0MB/s-29.0MB/s), io=829MiB (869MB), run=10001-10005msec 00:21:06.256 10:39:05 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:06.256 10:39:05 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:21:06.256 10:39:05 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:21:06.256 10:39:05 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:06.256 10:39:05 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:21:06.256 10:39:05 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:06.256 10:39:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.256 10:39:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:06.256 10:39:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.256 10:39:05 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:06.256 10:39:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.256 10:39:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:06.256 10:39:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.256 00:21:06.256 real 0m11.032s 00:21:06.256 user 0m28.172s 00:21:06.256 sys 0m2.574s 00:21:06.256 10:39:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:06.256 10:39:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:06.256 ************************************ 00:21:06.257 END TEST fio_dif_digest 00:21:06.257 ************************************ 00:21:06.257 10:39:05 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:06.257 10:39:05 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:21:06.257 10:39:05 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:06.257 10:39:05 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:21:06.257 10:39:05 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:06.257 10:39:05 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:21:06.257 10:39:05 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:06.257 10:39:05 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:06.257 rmmod nvme_tcp 00:21:06.257 rmmod nvme_fabrics 00:21:06.257 rmmod nvme_keyring 00:21:06.257 10:39:05 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:06.257 10:39:05 
nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:21:06.257 10:39:05 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:21:06.257 10:39:05 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 83131 ']' 00:21:06.257 10:39:05 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 83131 00:21:06.257 10:39:05 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 83131 ']' 00:21:06.257 10:39:05 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 83131 00:21:06.257 10:39:05 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:21:06.257 10:39:05 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:06.257 10:39:05 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83131 00:21:06.257 10:39:05 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:06.257 10:39:05 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:06.257 killing process with pid 83131 00:21:06.257 10:39:05 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83131' 00:21:06.257 10:39:05 nvmf_dif -- common/autotest_common.sh@971 -- # kill 83131 00:21:06.257 10:39:05 nvmf_dif -- common/autotest_common.sh@976 -- # wait 83131 00:21:06.257 10:39:05 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:21:06.257 10:39:05 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:06.257 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:06.257 Waiting for block devices as requested 00:21:06.257 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:06.257 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:06.257 10:39:06 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:06.257 10:39:06 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:06.257 10:39:06 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:21:06.257 10:39:06 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:21:06.257 10:39:06 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:06.257 10:39:06 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:21:06.257 10:39:06 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:06.257 10:39:06 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:06.257 10:39:06 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:06.257 10:39:06 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:06.257 10:39:06 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:06.257 10:39:06 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:06.257 10:39:06 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:06.257 10:39:06 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:06.257 10:39:06 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:06.257 10:39:06 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:06.257 10:39:06 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:06.257 10:39:06 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:06.257 10:39:06 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:06.257 10:39:06 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:06.257 10:39:06 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:06.257 10:39:06 
nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:06.257 10:39:06 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.257 10:39:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:06.257 10:39:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.257 10:39:06 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:21:06.257 00:21:06.257 real 0m59.931s 00:21:06.257 user 3m47.638s 00:21:06.257 sys 0m20.241s 00:21:06.257 10:39:06 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:06.257 10:39:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:06.257 ************************************ 00:21:06.257 END TEST nvmf_dif 00:21:06.257 ************************************ 00:21:06.257 10:39:06 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:06.257 10:39:06 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:06.257 10:39:06 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:06.257 10:39:06 -- common/autotest_common.sh@10 -- # set +x 00:21:06.257 ************************************ 00:21:06.257 START TEST nvmf_abort_qd_sizes 00:21:06.257 ************************************ 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:06.257 * Looking for test storage... 00:21:06.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:06.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.257 --rc genhtml_branch_coverage=1 00:21:06.257 --rc genhtml_function_coverage=1 00:21:06.257 --rc genhtml_legend=1 00:21:06.257 --rc geninfo_all_blocks=1 00:21:06.257 --rc geninfo_unexecuted_blocks=1 00:21:06.257 00:21:06.257 ' 00:21:06.257 10:39:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:06.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.257 --rc genhtml_branch_coverage=1 00:21:06.257 --rc genhtml_function_coverage=1 00:21:06.257 --rc genhtml_legend=1 00:21:06.257 --rc geninfo_all_blocks=1 00:21:06.257 --rc geninfo_unexecuted_blocks=1 00:21:06.257 00:21:06.257 ' 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:06.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.258 --rc genhtml_branch_coverage=1 00:21:06.258 --rc genhtml_function_coverage=1 00:21:06.258 --rc genhtml_legend=1 00:21:06.258 --rc geninfo_all_blocks=1 00:21:06.258 --rc geninfo_unexecuted_blocks=1 00:21:06.258 00:21:06.258 ' 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:06.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.258 --rc genhtml_branch_coverage=1 00:21:06.258 --rc genhtml_function_coverage=1 00:21:06.258 --rc genhtml_legend=1 00:21:06.258 --rc geninfo_all_blocks=1 00:21:06.258 --rc geninfo_unexecuted_blocks=1 00:21:06.258 00:21:06.258 ' 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:06.258 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:06.258 Cannot find device "nvmf_init_br" 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:06.258 Cannot find device "nvmf_init_br2" 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:06.258 Cannot find device "nvmf_tgt_br" 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:06.258 Cannot find device "nvmf_tgt_br2" 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:06.258 Cannot find device "nvmf_init_br" 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:06.258 Cannot find device "nvmf_init_br2" 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:06.258 Cannot find device "nvmf_tgt_br" 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:06.258 Cannot find device "nvmf_tgt_br2" 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:06.258 Cannot find device "nvmf_br" 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:06.258 Cannot find device "nvmf_init_if" 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:06.258 Cannot find device "nvmf_init_if2" 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:21:06.258 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:06.259 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:06.259 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:06.259 10:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:06.259 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:06.259 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:06.259 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:06.259 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:06.259 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:06.259 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:06.259 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:06.259 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:06.259 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:06.259 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:06.259 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:21:06.259 00:21:06.259 --- 10.0.0.3 ping statistics --- 00:21:06.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.259 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:21:06.259 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:06.259 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:06.259 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:21:06.259 00:21:06.259 --- 10.0.0.4 ping statistics --- 00:21:06.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.259 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:21:06.259 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:06.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:06.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:21:06.259 00:21:06.259 --- 10.0.0.1 ping statistics --- 00:21:06.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.259 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:21:06.259 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:06.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:06.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:21:06.259 00:21:06.259 --- 10.0.0.2 ping statistics --- 00:21:06.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.259 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:21:06.259 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:06.259 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:21:06.259 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:21:06.259 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:07.196 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:07.196 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:07.196 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:07.196 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:07.196 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:07.196 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:07.196 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:07.196 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:07.196 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:07.196 10:39:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:07.196 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:07.196 10:39:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:07.196 10:39:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:07.196 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84536 00:21:07.196 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:07.196 10:39:07 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84536 00:21:07.196 10:39:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 84536 ']' 00:21:07.196 10:39:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.196 10:39:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:07.196 10:39:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.196 10:39:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:07.196 10:39:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:07.196 [2024-11-15 10:39:07.992331] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
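Note: the "Cannot find device" and "Cannot open network namespace" messages higher up come from the pre-clean pass that runs before anything has been created; each delete is followed by a true, so they are expected on a fresh host. The topology that nvmf_veth_init then builds, and the target launch that nvmfappstart performs inside the namespace, can be reproduced by hand. A minimal sketch, trimmed to a single initiator/target pair and using the interface names, addresses, and nvmf_tgt arguments shown in the trace above:

    # Target side lives in its own network namespace; host side is bridged.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Open the NVMe/TCP port and start the target inside the namespace.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &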
00:21:07.196 [2024-11-15 10:39:07.992455] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.454 [2024-11-15 10:39:08.146717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:07.454 [2024-11-15 10:39:08.215277] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.454 [2024-11-15 10:39:08.215355] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.454 [2024-11-15 10:39:08.215369] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.454 [2024-11-15 10:39:08.215380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.454 [2024-11-15 10:39:08.215389] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:07.454 [2024-11-15 10:39:08.216544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.454 [2024-11-15 10:39:08.216685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.454 [2024-11-15 10:39:08.216818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:07.454 [2024-11-15 10:39:08.216823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.454 [2024-11-15 10:39:08.275152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:21:08.392 10:39:09 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
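nvme_in_userspace above selects NVMe controllers by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express), keeping only functions not excluded from the test. The core of that enumeration is the lspci pipeline visible in the trace; run standalone it is simply:

    # Print the PCI addresses of all NVMe controllers (class/subclass 0108, prog-if 02).
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
        | tr -d '"'
    # On this VM it returns the two emulated controllers picked up above:
    # 0000:00:10.0 and 0000:00:11.0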
00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:08.392 10:39:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:08.392 ************************************ 00:21:08.392 START TEST spdk_target_abort 00:21:08.392 ************************************ 00:21:08.392 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:21:08.392 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:08.392 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:08.392 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.392 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:08.392 spdk_targetn1 00:21:08.392 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.392 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:08.392 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.392 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:08.392 [2024-11-15 10:39:09.164749] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:08.392 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.392 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:08.392 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.392 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:08.392 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.392 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:08.392 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.392 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:08.392 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.392 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:21:08.392 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.392 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:08.392 [2024-11-15 10:39:09.205565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:08.392 10:39:09 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.392 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:21:08.392 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:08.393 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:08.393 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:21:08.393 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:08.393 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:08.393 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:08.393 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:08.393 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:08.393 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:08.393 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:08.393 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:08.393 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:08.393 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:08.393 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:21:08.393 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:08.393 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:08.393 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:08.393 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:08.393 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:08.393 10:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:11.728 Initializing NVMe Controllers 00:21:11.728 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:11.728 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:11.728 Initialization complete. Launching workers. 
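The target side for spdk_target_abort was configured just above with five RPCs: attach the local PCIe controller as bdev spdk_target, create the TCP transport, create subsystem nqn.2016-06.io.spdk:testnqn, add the namespace, and listen on 10.0.0.3:4420. A rough standalone sketch of the same setup plus the queue-depth sweep follows (rpc_cmd in the harness is assumed here to map to scripts/rpc.py against the /var/tmp/spdk.sock socket named in the wait message above); the per-queue-depth statistics continue in the log below.

    rpc=scripts/rpc.py
    $rpc bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420
    # Same abort sweep as the test: queue depths 4, 24 and 64.
    for qd in 4 24 64; do
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done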
00:21:11.728 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10453, failed: 0 00:21:11.728 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1022, failed to submit 9431 00:21:11.728 success 759, unsuccessful 263, failed 0 00:21:11.728 10:39:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:11.728 10:39:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:15.014 Initializing NVMe Controllers 00:21:15.014 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:15.014 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:15.014 Initialization complete. Launching workers. 00:21:15.014 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8915, failed: 0 00:21:15.014 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1134, failed to submit 7781 00:21:15.014 success 384, unsuccessful 750, failed 0 00:21:15.014 10:39:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:15.014 10:39:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:18.397 Initializing NVMe Controllers 00:21:18.397 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:18.397 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:18.397 Initialization complete. Launching workers. 
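Reading the abort statistics: in these runs the aborts the example submitted plus the ones it failed to submit add up to the I/Os completed, and successful plus unsuccessful aborts add up to the aborts submitted. For the qd=4 run above that is 1022 + 9431 = 10453 completed I/Os and 759 + 263 = 1022 submitted aborts; the qd=24 run matches the same pattern (1134 + 7781 = 8915 and 384 + 750 = 1134). The qd=64 figures follow next.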
00:21:18.397 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31628, failed: 0 00:21:18.397 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2324, failed to submit 29304 00:21:18.397 success 454, unsuccessful 1870, failed 0 00:21:18.397 10:39:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:18.397 10:39:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.397 10:39:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:18.397 10:39:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.397 10:39:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:18.397 10:39:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.397 10:39:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:18.964 10:39:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.964 10:39:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84536 00:21:18.964 10:39:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 84536 ']' 00:21:18.965 10:39:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 84536 00:21:18.965 10:39:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:21:18.965 10:39:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:18.965 10:39:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84536 00:21:18.965 10:39:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:18.965 10:39:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:18.965 10:39:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84536' 00:21:18.965 killing process with pid 84536 00:21:18.965 10:39:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 84536 00:21:18.965 10:39:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 84536 00:21:19.224 00:21:19.224 real 0m10.824s 00:21:19.224 user 0m43.889s 00:21:19.224 sys 0m2.110s 00:21:19.224 10:39:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:19.224 10:39:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:19.224 ************************************ 00:21:19.224 END TEST spdk_target_abort 00:21:19.224 ************************************ 00:21:19.224 10:39:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:19.224 10:39:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:19.224 10:39:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:19.224 10:39:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:19.224 ************************************ 00:21:19.224 START TEST kernel_target_abort 00:21:19.224 
************************************ 00:21:19.224 10:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:21:19.224 10:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:19.224 10:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:21:19.224 10:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:19.224 10:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:19.224 10:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:19.224 10:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:19.224 10:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:19.224 10:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:19.224 10:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:19.224 10:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:19.224 10:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:19.224 10:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:19.224 10:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:19.224 10:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:21:19.224 10:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:19.224 10:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:19.224 10:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:19.224 10:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:21:19.224 10:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:19.224 10:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:21:19.224 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:19.224 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:19.483 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:19.742 Waiting for block devices as requested 00:21:19.742 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:19.742 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:20.001 No valid GPT data, bailing 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:20.001 No valid GPT data, bailing 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:21:20.001 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:20.002 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:20.002 No valid GPT data, bailing 00:21:20.002 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:20.002 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:20.002 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:20.002 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:21:20.002 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:20.002 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:20.002 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:21:20.002 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:21:20.002 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:20.002 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:20.002 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:21:20.002 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:20.002 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:20.260 No valid GPT data, bailing 00:21:20.260 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:20.260 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:20.260 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:20.260 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:21:20.260 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:20.260 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:20.260 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:20.260 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:20.260 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:20.260 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:21:20.260 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:21:20.260 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:21:20.260 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:21:20.260 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:21:20.260 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:21:20.260 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:21:20.260 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:20.261 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 --hostid=b4733420-cf17-49bc-adb6-f89fe6fa7a33 -a 10.0.0.1 -t tcp -s 4420 00:21:20.261 00:21:20.261 Discovery Log Number of Records 2, Generation counter 2 00:21:20.261 =====Discovery Log Entry 0====== 00:21:20.261 trtype: tcp 00:21:20.261 adrfam: ipv4 00:21:20.261 subtype: current discovery subsystem 00:21:20.261 treq: not specified, sq flow control disable supported 00:21:20.261 portid: 1 00:21:20.261 trsvcid: 4420 00:21:20.261 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:20.261 traddr: 10.0.0.1 00:21:20.261 eflags: none 00:21:20.261 sectype: none 00:21:20.261 =====Discovery Log Entry 1====== 00:21:20.261 trtype: tcp 00:21:20.261 adrfam: ipv4 00:21:20.261 subtype: nvme subsystem 00:21:20.261 treq: not specified, sq flow control disable supported 00:21:20.261 portid: 1 00:21:20.261 trsvcid: 4420 00:21:20.261 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:20.261 traddr: 10.0.0.1 00:21:20.261 eflags: none 00:21:20.261 sectype: none 00:21:20.261 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:20.261 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:20.261 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:20.261 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:20.261 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:20.261 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:20.261 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:20.261 10:39:20 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:20.261 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:20.261 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:20.261 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:20.261 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:20.261 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:20.261 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:20.261 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:20.261 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:20.261 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:20.261 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:20.261 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:20.261 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:20.261 10:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:23.552 Initializing NVMe Controllers 00:21:23.552 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:23.552 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:23.552 Initialization complete. Launching workers. 00:21:23.552 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34540, failed: 0 00:21:23.552 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34540, failed to submit 0 00:21:23.552 success 0, unsuccessful 34540, failed 0 00:21:23.552 10:39:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:23.552 10:39:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:26.840 Initializing NVMe Controllers 00:21:26.840 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:26.840 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:26.840 Initialization complete. Launching workers. 
00:21:26.840 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 71281, failed: 0 00:21:26.840 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31707, failed to submit 39574 00:21:26.840 success 0, unsuccessful 31707, failed 0 00:21:26.840 10:39:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:26.840 10:39:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:30.127 Initializing NVMe Controllers 00:21:30.127 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:30.127 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:30.127 Initialization complete. Launching workers. 00:21:30.127 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 83144, failed: 0 00:21:30.127 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20792, failed to submit 62352 00:21:30.127 success 0, unsuccessful 20792, failed 0 00:21:30.127 10:39:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:30.127 10:39:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:30.127 10:39:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:21:30.127 10:39:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:30.127 10:39:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:30.127 10:39:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:30.127 10:39:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:30.127 10:39:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:21:30.127 10:39:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:21:30.127 10:39:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:30.695 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:32.598 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:32.598 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:32.598 00:21:32.598 real 0m13.168s 00:21:32.598 user 0m6.493s 00:21:32.598 sys 0m4.151s 00:21:32.598 10:39:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:32.598 10:39:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:32.598 ************************************ 00:21:32.598 END TEST kernel_target_abort 00:21:32.598 ************************************ 00:21:32.598 10:39:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:32.598 10:39:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:32.598 
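kernel_target_abort drives the same abort workload against the in-kernel nvmet target instead of nvmf_tgt: configure_kernel_target builds the subsystem through configfs and clean_kernel_target above tears it down in reverse order. A condensed sketch of both halves; the commands mirror the trace, while the attribute file names (attr_allow_any_host, device_path, enable, addr_*) are the standard nvmet configfs entries, inferred because xtrace does not show redirection targets:

    modprobe nvmet
    modprobe nvmet_tcp
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo 1            > "$sub/attr_allow_any_host"
    echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"   # block device picked by the GPT-check loop above
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"
    # ... run the abort sweep against 10.0.0.1:4420, then tear down as the log does:
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$sub/namespaces/1" "$port" "$sub"
    modprobe -r nvmet_tcp nvmet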
10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:32.598 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:21:32.598 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:32.598 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:21:32.598 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:32.598 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:32.598 rmmod nvme_tcp 00:21:32.598 rmmod nvme_fabrics 00:21:32.598 rmmod nvme_keyring 00:21:32.598 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:32.598 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:21:32.598 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:21:32.598 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84536 ']' 00:21:32.598 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84536 00:21:32.598 10:39:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 84536 ']' 00:21:32.598 10:39:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 84536 00:21:32.598 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (84536) - No such process 00:21:32.598 Process with pid 84536 is not found 00:21:32.598 10:39:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 84536 is not found' 00:21:32.598 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:21:32.598 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:32.856 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:32.856 Waiting for block devices as requested 00:21:32.856 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:33.115 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:33.115 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:33.115 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:33.115 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:21:33.115 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:21:33.115 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:33.115 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:21:33.115 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:33.115 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:33.115 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:33.115 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:33.115 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:33.115 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:33.115 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:33.115 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:33.115 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:33.374 10:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:33.374 10:39:33 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:33.374 10:39:34 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:33.374 10:39:34 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:33.374 10:39:34 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:33.374 10:39:34 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:33.374 10:39:34 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:33.374 10:39:34 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.374 10:39:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:33.374 10:39:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.374 10:39:34 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:21:33.374 00:21:33.374 real 0m27.670s 00:21:33.374 user 0m51.707s 00:21:33.374 sys 0m7.778s 00:21:33.374 10:39:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:33.374 ************************************ 00:21:33.374 END TEST nvmf_abort_qd_sizes 00:21:33.374 ************************************ 00:21:33.374 10:39:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:33.374 10:39:34 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:33.374 10:39:34 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:33.374 10:39:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:33.374 10:39:34 -- common/autotest_common.sh@10 -- # set +x 00:21:33.374 ************************************ 00:21:33.374 START TEST keyring_file 00:21:33.374 ************************************ 00:21:33.374 10:39:34 keyring_file -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:33.635 * Looking for test storage... 
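With the kernel-target test done, nvmftestfini runs the generic TCP cleanup seen above: unload the host-side nvme modules, strip the SPDK iptables rules, and dismantle the veth/bridge topology the harness built. Condensed from the nvmf/common.sh calls in the trace (interface, bridge, and namespace names are the fixed ones this harness uses; the final namespace removal behind _remove_spdk_ns is assumed):

  modprobe -v -r nvme-tcp                                     # removes nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore        # drop only the SPDK_NVMF rules
  for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$ifc" nomaster
      ip link set "$ifc" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                            # assumed: what _remove_spdk_ns boils down to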
00:21:33.635 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:33.635 10:39:34 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:33.635 10:39:34 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:21:33.635 10:39:34 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:33.635 10:39:34 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:33.635 10:39:34 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:33.635 10:39:34 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:33.635 10:39:34 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:33.635 10:39:34 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:21:33.635 10:39:34 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:21:33.635 10:39:34 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:21:33.635 10:39:34 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:21:33.635 10:39:34 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:21:33.635 10:39:34 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:21:33.635 10:39:34 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:21:33.636 10:39:34 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:33.636 10:39:34 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:21:33.636 10:39:34 keyring_file -- scripts/common.sh@345 -- # : 1 00:21:33.636 10:39:34 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:33.636 10:39:34 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:33.636 10:39:34 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:21:33.636 10:39:34 keyring_file -- scripts/common.sh@353 -- # local d=1 00:21:33.636 10:39:34 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:33.636 10:39:34 keyring_file -- scripts/common.sh@355 -- # echo 1 00:21:33.636 10:39:34 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:21:33.636 10:39:34 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:21:33.636 10:39:34 keyring_file -- scripts/common.sh@353 -- # local d=2 00:21:33.636 10:39:34 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:33.636 10:39:34 keyring_file -- scripts/common.sh@355 -- # echo 2 00:21:33.636 10:39:34 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:21:33.636 10:39:34 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:33.636 10:39:34 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:33.636 10:39:34 keyring_file -- scripts/common.sh@368 -- # return 0 00:21:33.636 10:39:34 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:33.636 10:39:34 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:33.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.636 --rc genhtml_branch_coverage=1 00:21:33.636 --rc genhtml_function_coverage=1 00:21:33.636 --rc genhtml_legend=1 00:21:33.636 --rc geninfo_all_blocks=1 00:21:33.636 --rc geninfo_unexecuted_blocks=1 00:21:33.636 00:21:33.636 ' 00:21:33.636 10:39:34 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:33.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.636 --rc genhtml_branch_coverage=1 00:21:33.636 --rc genhtml_function_coverage=1 00:21:33.636 --rc genhtml_legend=1 00:21:33.636 --rc geninfo_all_blocks=1 00:21:33.636 --rc 
geninfo_unexecuted_blocks=1 00:21:33.636 00:21:33.636 ' 00:21:33.636 10:39:34 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:33.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.636 --rc genhtml_branch_coverage=1 00:21:33.636 --rc genhtml_function_coverage=1 00:21:33.636 --rc genhtml_legend=1 00:21:33.636 --rc geninfo_all_blocks=1 00:21:33.636 --rc geninfo_unexecuted_blocks=1 00:21:33.636 00:21:33.636 ' 00:21:33.636 10:39:34 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:33.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.636 --rc genhtml_branch_coverage=1 00:21:33.636 --rc genhtml_function_coverage=1 00:21:33.636 --rc genhtml_legend=1 00:21:33.636 --rc geninfo_all_blocks=1 00:21:33.636 --rc geninfo_unexecuted_blocks=1 00:21:33.636 00:21:33.636 ' 00:21:33.636 10:39:34 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:33.636 10:39:34 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:33.636 10:39:34 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:21:33.636 10:39:34 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.636 10:39:34 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.636 10:39:34 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.636 10:39:34 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.636 10:39:34 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.636 10:39:34 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.636 10:39:34 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:33.636 10:39:34 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@51 -- # : 0 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:33.636 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:33.636 10:39:34 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:33.636 10:39:34 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:33.636 10:39:34 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:33.636 10:39:34 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:33.636 10:39:34 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:33.636 10:39:34 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:33.636 10:39:34 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:33.636 10:39:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:33.636 10:39:34 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:33.636 10:39:34 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:33.636 10:39:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:33.636 10:39:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:33.636 10:39:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.OIdx1cAod6 00:21:33.636 10:39:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:33.636 10:39:34 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:33.895 10:39:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.OIdx1cAod6 00:21:33.895 10:39:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.OIdx1cAod6 00:21:33.895 10:39:34 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.OIdx1cAod6 00:21:33.895 10:39:34 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:33.895 10:39:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:33.895 10:39:34 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:33.895 10:39:34 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:33.895 10:39:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:33.895 10:39:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:33.895 10:39:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.pOQLBdpxHh 00:21:33.895 10:39:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:33.895 10:39:34 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:33.895 10:39:34 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:33.895 10:39:34 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:33.895 10:39:34 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:21:33.895 10:39:34 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:33.895 10:39:34 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:33.895 10:39:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pOQLBdpxHh 00:21:33.895 10:39:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.pOQLBdpxHh 00:21:33.895 10:39:34 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.pOQLBdpxHh 00:21:33.895 10:39:34 keyring_file -- keyring/file.sh@30 -- # tgtpid=85454 00:21:33.895 10:39:34 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:33.895 10:39:34 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85454 00:21:33.895 10:39:34 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 85454 ']' 00:21:33.895 10:39:34 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.895 10:39:34 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:33.895 10:39:34 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:21:33.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.895 10:39:34 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:33.895 10:39:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:33.895 [2024-11-15 10:39:34.658828] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:21:33.895 [2024-11-15 10:39:34.659899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85454 ] 00:21:34.153 [2024-11-15 10:39:34.807414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.153 [2024-11-15 10:39:34.864999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.153 [2024-11-15 10:39:34.939528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:34.412 10:39:35 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:34.412 10:39:35 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:21:34.412 10:39:35 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:34.412 10:39:35 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.412 10:39:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:34.412 [2024-11-15 10:39:35.151568] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.412 null0 00:21:34.412 [2024-11-15 10:39:35.183552] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:34.412 [2024-11-15 10:39:35.183778] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:34.412 10:39:35 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.412 10:39:35 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:34.412 10:39:35 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:34.412 10:39:35 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:34.412 10:39:35 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:34.412 10:39:35 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:34.412 10:39:35 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:34.412 10:39:35 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:34.412 10:39:35 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:34.412 10:39:35 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.412 10:39:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:34.412 [2024-11-15 10:39:35.215524] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:34.412 request: 00:21:34.412 { 00:21:34.412 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:34.412 "secure_channel": false, 00:21:34.412 "listen_address": { 00:21:34.412 "trtype": "tcp", 00:21:34.412 "traddr": "127.0.0.1", 00:21:34.412 "trsvcid": "4420" 00:21:34.412 }, 00:21:34.412 "method": "nvmf_subsystem_add_listener", 00:21:34.412 "req_id": 1 00:21:34.412 } 
00:21:34.412 Got JSON-RPC error response 00:21:34.412 response: 00:21:34.412 { 00:21:34.412 "code": -32602, 00:21:34.412 "message": "Invalid parameters" 00:21:34.412 } 00:21:34.412 10:39:35 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:34.412 10:39:35 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:34.412 10:39:35 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:34.412 10:39:35 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:34.412 10:39:35 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:34.412 10:39:35 keyring_file -- keyring/file.sh@47 -- # bperfpid=85464 00:21:34.412 10:39:35 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:34.412 10:39:35 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85464 /var/tmp/bperf.sock 00:21:34.412 10:39:35 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 85464 ']' 00:21:34.412 10:39:35 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:34.412 10:39:35 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:34.412 10:39:35 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:34.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:34.412 10:39:35 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:34.412 10:39:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:34.670 [2024-11-15 10:39:35.284375] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
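With the duplicate-listener negative test out of the way, the harness starts bdevperf with -z (so it sits idle until driven over RPC) on /var/tmp/bperf.sock and registers against it the two PSK files that prep_key staged earlier. That staging is worth spelling out, since the later permission and missing-file tests depend on it: a mktemp file holding the NVMeTLSkey-1 interchange string produced by format_interchange_psk (the redirect into the file is not shown by the xtrace but is what prep_key does with the output), locked down to 0600 and then added by name. A condensed sketch with the key material and socket from this run:

  key0path=$(mktemp)                                                        # /tmp/tmp.OIdx1cAod6 here
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"   # emits the NVMeTLSkey-1 interchange string
  chmod 0600 "$key0path"                                                    # anything looser is rejected later in this test
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      keyring_file_add_key key0 "$key0path"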
00:21:34.670 [2024-11-15 10:39:35.284695] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85464 ] 00:21:34.670 [2024-11-15 10:39:35.438072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.670 [2024-11-15 10:39:35.505560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.928 [2024-11-15 10:39:35.565126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:34.928 10:39:35 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:34.928 10:39:35 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:21:34.928 10:39:35 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OIdx1cAod6 00:21:34.928 10:39:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OIdx1cAod6 00:21:35.187 10:39:35 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.pOQLBdpxHh 00:21:35.187 10:39:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.pOQLBdpxHh 00:21:35.445 10:39:36 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:21:35.445 10:39:36 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:35.445 10:39:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:35.445 10:39:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:35.445 10:39:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:36.013 10:39:36 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.OIdx1cAod6 == \/\t\m\p\/\t\m\p\.\O\I\d\x\1\c\A\o\d\6 ]] 00:21:36.013 10:39:36 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:21:36.013 10:39:36 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:21:36.013 10:39:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:36.013 10:39:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:36.013 10:39:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:36.272 10:39:36 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.pOQLBdpxHh == \/\t\m\p\/\t\m\p\.\p\O\Q\L\B\d\p\x\H\h ]] 00:21:36.272 10:39:36 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:21:36.272 10:39:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:36.272 10:39:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:36.272 10:39:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:36.272 10:39:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:36.272 10:39:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:36.531 10:39:37 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:36.531 10:39:37 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:21:36.531 10:39:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:36.531 10:39:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:36.531 10:39:37 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:36.531 10:39:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:36.531 10:39:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:36.790 10:39:37 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:21:36.790 10:39:37 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:36.790 10:39:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:37.049 [2024-11-15 10:39:37.667348] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:37.049 nvme0n1 00:21:37.049 10:39:37 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:21:37.049 10:39:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:37.049 10:39:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:37.049 10:39:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:37.049 10:39:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:37.049 10:39:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:37.308 10:39:38 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:21:37.308 10:39:38 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:21:37.308 10:39:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:37.308 10:39:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:37.308 10:39:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:37.308 10:39:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:37.308 10:39:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:37.566 10:39:38 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:21:37.566 10:39:38 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:37.825 Running I/O for 1 seconds... 
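The attach at file.sh@58 above is where key0 is actually consumed: the controller comes up with --psk key0 (TLS against the target's listener on 127.0.0.1:4420), which is why the refcount checks that follow see key0 go from 1 to 2 while key1 stays at 1, and only then is the I/O pass kicked off over the same RPC socket. The same three steps as bare RPC calls, using the paths from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  $rpc -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'   # 2 while nvme0 holds it
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests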
00:21:38.762 11302.00 IOPS, 44.15 MiB/s 00:21:38.762 Latency(us) 00:21:38.762 [2024-11-15T10:39:39.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.762 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:38.762 nvme0n1 : 1.01 11352.95 44.35 0.00 0.00 11240.94 4140.68 16920.20 00:21:38.762 [2024-11-15T10:39:39.615Z] =================================================================================================================== 00:21:38.762 [2024-11-15T10:39:39.615Z] Total : 11352.95 44.35 0.00 0.00 11240.94 4140.68 16920.20 00:21:38.762 { 00:21:38.762 "results": [ 00:21:38.762 { 00:21:38.762 "job": "nvme0n1", 00:21:38.762 "core_mask": "0x2", 00:21:38.762 "workload": "randrw", 00:21:38.762 "percentage": 50, 00:21:38.762 "status": "finished", 00:21:38.762 "queue_depth": 128, 00:21:38.762 "io_size": 4096, 00:21:38.762 "runtime": 1.006963, 00:21:38.762 "iops": 11352.94941323564, 00:21:38.762 "mibps": 44.34745864545172, 00:21:38.762 "io_failed": 0, 00:21:38.762 "io_timeout": 0, 00:21:38.762 "avg_latency_us": 11240.942899675552, 00:21:38.762 "min_latency_us": 4140.683636363637, 00:21:38.762 "max_latency_us": 16920.203636363636 00:21:38.762 } 00:21:38.762 ], 00:21:38.762 "core_count": 1 00:21:38.762 } 00:21:38.762 10:39:39 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:38.762 10:39:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:39.022 10:39:39 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:21:39.022 10:39:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:39.022 10:39:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:39.022 10:39:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:39.022 10:39:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:39.022 10:39:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:39.280 10:39:40 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:39.281 10:39:40 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:21:39.281 10:39:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:39.281 10:39:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:39.281 10:39:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:39.281 10:39:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:39.281 10:39:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:39.539 10:39:40 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:21:39.539 10:39:40 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:39.539 10:39:40 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:39.539 10:39:40 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:39.539 10:39:40 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:39.539 10:39:40 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:39.539 10:39:40 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:39.539 10:39:40 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:39.539 10:39:40 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:39.539 10:39:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:39.798 [2024-11-15 10:39:40.639935] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:39.798 [2024-11-15 10:39:40.640090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa405d0 (107): Transport endpoint is not connected 00:21:39.798 [2024-11-15 10:39:40.641073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa405d0 (9): Bad file descriptor 00:21:39.798 [2024-11-15 10:39:40.642059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:21:39.798 [2024-11-15 10:39:40.642265] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:39.798 [2024-11-15 10:39:40.642283] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:39.798 [2024-11-15 10:39:40.642296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:21:39.798 request: 00:21:39.798 { 00:21:39.798 "name": "nvme0", 00:21:39.798 "trtype": "tcp", 00:21:39.798 "traddr": "127.0.0.1", 00:21:39.798 "adrfam": "ipv4", 00:21:39.798 "trsvcid": "4420", 00:21:39.798 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:39.798 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:39.798 "prchk_reftag": false, 00:21:39.798 "prchk_guard": false, 00:21:39.798 "hdgst": false, 00:21:39.798 "ddgst": false, 00:21:39.798 "psk": "key1", 00:21:39.798 "allow_unrecognized_csi": false, 00:21:39.798 "method": "bdev_nvme_attach_controller", 00:21:39.798 "req_id": 1 00:21:39.798 } 00:21:39.798 Got JSON-RPC error response 00:21:39.798 response: 00:21:39.798 { 00:21:39.798 "code": -5, 00:21:39.798 "message": "Input/output error" 00:21:39.798 } 00:21:40.057 10:39:40 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:40.057 10:39:40 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:40.057 10:39:40 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:40.057 10:39:40 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:40.057 10:39:40 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:21:40.057 10:39:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:40.057 10:39:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:40.057 10:39:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:40.057 10:39:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:40.057 10:39:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:40.316 10:39:41 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:40.316 10:39:41 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:21:40.316 10:39:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:40.316 10:39:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:40.316 10:39:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:40.316 10:39:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:40.316 10:39:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:40.575 10:39:41 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:21:40.575 10:39:41 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:21:40.575 10:39:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:40.834 10:39:41 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:21:40.834 10:39:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:41.093 10:39:41 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:21:41.093 10:39:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:41.093 10:39:41 keyring_file -- keyring/file.sh@78 -- # jq length 00:21:41.661 10:39:42 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:21:41.661 10:39:42 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.OIdx1cAod6 00:21:41.661 10:39:42 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.OIdx1cAod6 00:21:41.661 10:39:42 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:21:41.661 10:39:42 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.OIdx1cAod6 00:21:41.661 10:39:42 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:41.661 10:39:42 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.661 10:39:42 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:41.661 10:39:42 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.661 10:39:42 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OIdx1cAod6 00:21:41.661 10:39:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OIdx1cAod6 00:21:41.661 [2024-11-15 10:39:42.438377] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.OIdx1cAod6': 0100660 00:21:41.661 [2024-11-15 10:39:42.438448] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:41.661 request: 00:21:41.661 { 00:21:41.661 "name": "key0", 00:21:41.661 "path": "/tmp/tmp.OIdx1cAod6", 00:21:41.661 "method": "keyring_file_add_key", 00:21:41.661 "req_id": 1 00:21:41.661 } 00:21:41.661 Got JSON-RPC error response 00:21:41.661 response: 00:21:41.661 { 00:21:41.661 "code": -1, 00:21:41.661 "message": "Operation not permitted" 00:21:41.661 } 00:21:41.661 10:39:42 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:41.661 10:39:42 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:41.661 10:39:42 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:41.661 10:39:42 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:41.661 10:39:42 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.OIdx1cAod6 00:21:41.661 10:39:42 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OIdx1cAod6 00:21:41.661 10:39:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OIdx1cAod6 00:21:41.921 10:39:42 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.OIdx1cAod6 00:21:41.921 10:39:42 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:21:41.921 10:39:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:41.921 10:39:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:41.921 10:39:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:41.921 10:39:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:41.921 10:39:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:42.489 10:39:43 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:21:42.489 10:39:43 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:42.489 10:39:43 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:42.489 10:39:43 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:42.489 10:39:43 
keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:42.489 10:39:43 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:42.489 10:39:43 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:42.489 10:39:43 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:42.489 10:39:43 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:42.489 10:39:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:42.489 [2024-11-15 10:39:43.294576] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.OIdx1cAod6': No such file or directory 00:21:42.489 [2024-11-15 10:39:43.294658] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:42.489 [2024-11-15 10:39:43.294696] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:42.489 [2024-11-15 10:39:43.294706] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:21:42.489 [2024-11-15 10:39:43.294716] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:42.489 [2024-11-15 10:39:43.294724] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:42.489 request: 00:21:42.489 { 00:21:42.489 "name": "nvme0", 00:21:42.489 "trtype": "tcp", 00:21:42.489 "traddr": "127.0.0.1", 00:21:42.489 "adrfam": "ipv4", 00:21:42.489 "trsvcid": "4420", 00:21:42.489 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:42.489 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:42.489 "prchk_reftag": false, 00:21:42.489 "prchk_guard": false, 00:21:42.489 "hdgst": false, 00:21:42.489 "ddgst": false, 00:21:42.489 "psk": "key0", 00:21:42.489 "allow_unrecognized_csi": false, 00:21:42.489 "method": "bdev_nvme_attach_controller", 00:21:42.489 "req_id": 1 00:21:42.489 } 00:21:42.489 Got JSON-RPC error response 00:21:42.489 response: 00:21:42.489 { 00:21:42.489 "code": -19, 00:21:42.489 "message": "No such device" 00:21:42.489 } 00:21:42.489 10:39:43 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:42.489 10:39:43 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:42.489 10:39:43 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:42.489 10:39:43 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:42.489 10:39:43 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:21:42.489 10:39:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:42.749 10:39:43 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:42.749 10:39:43 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:42.749 10:39:43 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:42.749 10:39:43 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:42.749 
10:39:43 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:42.749 10:39:43 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:42.749 10:39:43 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.KTAQUbMwGl 00:21:42.749 10:39:43 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:42.749 10:39:43 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:42.749 10:39:43 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:42.749 10:39:43 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:42.749 10:39:43 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:42.749 10:39:43 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:42.749 10:39:43 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:43.014 10:39:43 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.KTAQUbMwGl 00:21:43.014 10:39:43 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.KTAQUbMwGl 00:21:43.014 10:39:43 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.KTAQUbMwGl 00:21:43.014 10:39:43 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.KTAQUbMwGl 00:21:43.014 10:39:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.KTAQUbMwGl 00:21:43.288 10:39:43 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:43.288 10:39:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:43.546 nvme0n1 00:21:43.546 10:39:44 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:21:43.546 10:39:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:43.546 10:39:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:43.547 10:39:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:43.547 10:39:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:43.547 10:39:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:43.805 10:39:44 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:21:43.805 10:39:44 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:21:43.806 10:39:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:44.064 10:39:44 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:21:44.064 10:39:44 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:21:44.064 10:39:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:44.064 10:39:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:44.064 10:39:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:44.323 10:39:45 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:21:44.323 10:39:45 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:21:44.323 10:39:45 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:21:44.323 10:39:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:44.323 10:39:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:44.323 10:39:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:44.323 10:39:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:44.583 10:39:45 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:21:44.583 10:39:45 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:44.583 10:39:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:45.153 10:39:45 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:21:45.153 10:39:45 keyring_file -- keyring/file.sh@105 -- # jq length 00:21:45.153 10:39:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:45.153 10:39:46 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:21:45.153 10:39:46 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.KTAQUbMwGl 00:21:45.153 10:39:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.KTAQUbMwGl 00:21:45.721 10:39:46 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.pOQLBdpxHh 00:21:45.721 10:39:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.pOQLBdpxHh 00:21:45.721 10:39:46 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:45.721 10:39:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:46.290 nvme0n1 00:21:46.290 10:39:46 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:21:46.290 10:39:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:46.550 10:39:47 keyring_file -- keyring/file.sh@113 -- # config='{ 00:21:46.550 "subsystems": [ 00:21:46.550 { 00:21:46.550 "subsystem": "keyring", 00:21:46.550 "config": [ 00:21:46.550 { 00:21:46.550 "method": "keyring_file_add_key", 00:21:46.550 "params": { 00:21:46.550 "name": "key0", 00:21:46.550 "path": "/tmp/tmp.KTAQUbMwGl" 00:21:46.550 } 00:21:46.550 }, 00:21:46.550 { 00:21:46.550 "method": "keyring_file_add_key", 00:21:46.550 "params": { 00:21:46.550 "name": "key1", 00:21:46.550 "path": "/tmp/tmp.pOQLBdpxHh" 00:21:46.550 } 00:21:46.550 } 00:21:46.550 ] 00:21:46.550 }, 00:21:46.550 { 00:21:46.550 "subsystem": "iobuf", 00:21:46.550 "config": [ 00:21:46.550 { 00:21:46.550 "method": "iobuf_set_options", 00:21:46.550 "params": { 00:21:46.550 "small_pool_count": 8192, 00:21:46.550 "large_pool_count": 1024, 00:21:46.550 "small_bufsize": 8192, 00:21:46.550 "large_bufsize": 135168, 00:21:46.550 "enable_numa": false 00:21:46.550 } 00:21:46.550 } 00:21:46.550 ] 00:21:46.550 }, 00:21:46.550 { 00:21:46.550 "subsystem": 
"sock", 00:21:46.550 "config": [ 00:21:46.550 { 00:21:46.550 "method": "sock_set_default_impl", 00:21:46.550 "params": { 00:21:46.550 "impl_name": "uring" 00:21:46.550 } 00:21:46.550 }, 00:21:46.550 { 00:21:46.550 "method": "sock_impl_set_options", 00:21:46.550 "params": { 00:21:46.550 "impl_name": "ssl", 00:21:46.550 "recv_buf_size": 4096, 00:21:46.550 "send_buf_size": 4096, 00:21:46.550 "enable_recv_pipe": true, 00:21:46.550 "enable_quickack": false, 00:21:46.550 "enable_placement_id": 0, 00:21:46.550 "enable_zerocopy_send_server": true, 00:21:46.550 "enable_zerocopy_send_client": false, 00:21:46.550 "zerocopy_threshold": 0, 00:21:46.550 "tls_version": 0, 00:21:46.550 "enable_ktls": false 00:21:46.550 } 00:21:46.550 }, 00:21:46.550 { 00:21:46.550 "method": "sock_impl_set_options", 00:21:46.550 "params": { 00:21:46.550 "impl_name": "posix", 00:21:46.550 "recv_buf_size": 2097152, 00:21:46.550 "send_buf_size": 2097152, 00:21:46.550 "enable_recv_pipe": true, 00:21:46.550 "enable_quickack": false, 00:21:46.550 "enable_placement_id": 0, 00:21:46.550 "enable_zerocopy_send_server": true, 00:21:46.550 "enable_zerocopy_send_client": false, 00:21:46.550 "zerocopy_threshold": 0, 00:21:46.550 "tls_version": 0, 00:21:46.550 "enable_ktls": false 00:21:46.550 } 00:21:46.550 }, 00:21:46.550 { 00:21:46.550 "method": "sock_impl_set_options", 00:21:46.550 "params": { 00:21:46.550 "impl_name": "uring", 00:21:46.550 "recv_buf_size": 2097152, 00:21:46.550 "send_buf_size": 2097152, 00:21:46.550 "enable_recv_pipe": true, 00:21:46.550 "enable_quickack": false, 00:21:46.550 "enable_placement_id": 0, 00:21:46.550 "enable_zerocopy_send_server": false, 00:21:46.550 "enable_zerocopy_send_client": false, 00:21:46.550 "zerocopy_threshold": 0, 00:21:46.550 "tls_version": 0, 00:21:46.550 "enable_ktls": false 00:21:46.550 } 00:21:46.550 } 00:21:46.550 ] 00:21:46.550 }, 00:21:46.550 { 00:21:46.550 "subsystem": "vmd", 00:21:46.550 "config": [] 00:21:46.550 }, 00:21:46.550 { 00:21:46.550 "subsystem": "accel", 00:21:46.550 "config": [ 00:21:46.550 { 00:21:46.550 "method": "accel_set_options", 00:21:46.550 "params": { 00:21:46.550 "small_cache_size": 128, 00:21:46.550 "large_cache_size": 16, 00:21:46.550 "task_count": 2048, 00:21:46.550 "sequence_count": 2048, 00:21:46.551 "buf_count": 2048 00:21:46.551 } 00:21:46.551 } 00:21:46.551 ] 00:21:46.551 }, 00:21:46.551 { 00:21:46.551 "subsystem": "bdev", 00:21:46.551 "config": [ 00:21:46.551 { 00:21:46.551 "method": "bdev_set_options", 00:21:46.551 "params": { 00:21:46.551 "bdev_io_pool_size": 65535, 00:21:46.551 "bdev_io_cache_size": 256, 00:21:46.551 "bdev_auto_examine": true, 00:21:46.551 "iobuf_small_cache_size": 128, 00:21:46.551 "iobuf_large_cache_size": 16 00:21:46.551 } 00:21:46.551 }, 00:21:46.551 { 00:21:46.551 "method": "bdev_raid_set_options", 00:21:46.551 "params": { 00:21:46.551 "process_window_size_kb": 1024, 00:21:46.551 "process_max_bandwidth_mb_sec": 0 00:21:46.551 } 00:21:46.551 }, 00:21:46.551 { 00:21:46.551 "method": "bdev_iscsi_set_options", 00:21:46.551 "params": { 00:21:46.551 "timeout_sec": 30 00:21:46.551 } 00:21:46.551 }, 00:21:46.551 { 00:21:46.551 "method": "bdev_nvme_set_options", 00:21:46.551 "params": { 00:21:46.551 "action_on_timeout": "none", 00:21:46.551 "timeout_us": 0, 00:21:46.551 "timeout_admin_us": 0, 00:21:46.551 "keep_alive_timeout_ms": 10000, 00:21:46.551 "arbitration_burst": 0, 00:21:46.551 "low_priority_weight": 0, 00:21:46.551 "medium_priority_weight": 0, 00:21:46.551 "high_priority_weight": 0, 00:21:46.551 "nvme_adminq_poll_period_us": 
10000, 00:21:46.551 "nvme_ioq_poll_period_us": 0, 00:21:46.551 "io_queue_requests": 512, 00:21:46.551 "delay_cmd_submit": true, 00:21:46.551 "transport_retry_count": 4, 00:21:46.551 "bdev_retry_count": 3, 00:21:46.551 "transport_ack_timeout": 0, 00:21:46.551 "ctrlr_loss_timeout_sec": 0, 00:21:46.551 "reconnect_delay_sec": 0, 00:21:46.551 "fast_io_fail_timeout_sec": 0, 00:21:46.551 "disable_auto_failback": false, 00:21:46.551 "generate_uuids": false, 00:21:46.551 "transport_tos": 0, 00:21:46.551 "nvme_error_stat": false, 00:21:46.551 "rdma_srq_size": 0, 00:21:46.551 "io_path_stat": false, 00:21:46.551 "allow_accel_sequence": false, 00:21:46.551 "rdma_max_cq_size": 0, 00:21:46.551 "rdma_cm_event_timeout_ms": 0, 00:21:46.551 "dhchap_digests": [ 00:21:46.551 "sha256", 00:21:46.551 "sha384", 00:21:46.551 "sha512" 00:21:46.551 ], 00:21:46.551 "dhchap_dhgroups": [ 00:21:46.551 "null", 00:21:46.551 "ffdhe2048", 00:21:46.551 "ffdhe3072", 00:21:46.551 "ffdhe4096", 00:21:46.551 "ffdhe6144", 00:21:46.551 "ffdhe8192" 00:21:46.551 ] 00:21:46.551 } 00:21:46.551 }, 00:21:46.551 { 00:21:46.551 "method": "bdev_nvme_attach_controller", 00:21:46.551 "params": { 00:21:46.551 "name": "nvme0", 00:21:46.551 "trtype": "TCP", 00:21:46.551 "adrfam": "IPv4", 00:21:46.551 "traddr": "127.0.0.1", 00:21:46.551 "trsvcid": "4420", 00:21:46.551 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:46.551 "prchk_reftag": false, 00:21:46.551 "prchk_guard": false, 00:21:46.551 "ctrlr_loss_timeout_sec": 0, 00:21:46.551 "reconnect_delay_sec": 0, 00:21:46.551 "fast_io_fail_timeout_sec": 0, 00:21:46.551 "psk": "key0", 00:21:46.551 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:46.551 "hdgst": false, 00:21:46.551 "ddgst": false, 00:21:46.551 "multipath": "multipath" 00:21:46.551 } 00:21:46.551 }, 00:21:46.551 { 00:21:46.551 "method": "bdev_nvme_set_hotplug", 00:21:46.551 "params": { 00:21:46.551 "period_us": 100000, 00:21:46.551 "enable": false 00:21:46.551 } 00:21:46.551 }, 00:21:46.551 { 00:21:46.551 "method": "bdev_wait_for_examine" 00:21:46.551 } 00:21:46.551 ] 00:21:46.551 }, 00:21:46.551 { 00:21:46.551 "subsystem": "nbd", 00:21:46.551 "config": [] 00:21:46.551 } 00:21:46.551 ] 00:21:46.551 }' 00:21:46.551 10:39:47 keyring_file -- keyring/file.sh@115 -- # killprocess 85464 00:21:46.551 10:39:47 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 85464 ']' 00:21:46.551 10:39:47 keyring_file -- common/autotest_common.sh@956 -- # kill -0 85464 00:21:46.551 10:39:47 keyring_file -- common/autotest_common.sh@957 -- # uname 00:21:46.551 10:39:47 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:46.551 10:39:47 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85464 00:21:46.551 killing process with pid 85464 00:21:46.551 Received shutdown signal, test time was about 1.000000 seconds 00:21:46.551 00:21:46.551 Latency(us) 00:21:46.551 [2024-11-15T10:39:47.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.551 [2024-11-15T10:39:47.404Z] =================================================================================================================== 00:21:46.551 [2024-11-15T10:39:47.404Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:46.551 10:39:47 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:46.551 10:39:47 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:46.551 10:39:47 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85464' 00:21:46.551 
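The keyring_file flow above captures the live bdevperf configuration with save_config just before the old bdevperf instance (pid 85464) is killed, and the next step replays that JSON into a fresh instance over process substitution (the -c /dev/fd/63 argument in the trace). A minimal sketch of that pattern, assuming the repo-relative paths used throughout this run and the same /var/tmp/bperf.sock RPC socket:

  # capture the running keyring/bdev configuration, including the
  # keyring_file_add_key entries and the attach with "psk": "key0"
  config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)

  # after the previous bdevperf on this socket has exited, start a new one that
  # loads the same configuration at boot; -z keeps it idle until perform_tests
  build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c <(echo "$config")
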
10:39:47 keyring_file -- common/autotest_common.sh@971 -- # kill 85464 00:21:46.551 10:39:47 keyring_file -- common/autotest_common.sh@976 -- # wait 85464 00:21:46.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:46.811 10:39:47 keyring_file -- keyring/file.sh@118 -- # bperfpid=85718 00:21:46.811 10:39:47 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85718 /var/tmp/bperf.sock 00:21:46.811 10:39:47 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 85718 ']' 00:21:46.811 10:39:47 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:46.811 10:39:47 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:46.811 10:39:47 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:46.811 10:39:47 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:46.811 10:39:47 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:21:46.811 "subsystems": [ 00:21:46.811 { 00:21:46.811 "subsystem": "keyring", 00:21:46.811 "config": [ 00:21:46.811 { 00:21:46.811 "method": "keyring_file_add_key", 00:21:46.811 "params": { 00:21:46.811 "name": "key0", 00:21:46.811 "path": "/tmp/tmp.KTAQUbMwGl" 00:21:46.811 } 00:21:46.811 }, 00:21:46.811 { 00:21:46.811 "method": "keyring_file_add_key", 00:21:46.811 "params": { 00:21:46.811 "name": "key1", 00:21:46.811 "path": "/tmp/tmp.pOQLBdpxHh" 00:21:46.811 } 00:21:46.811 } 00:21:46.811 ] 00:21:46.811 }, 00:21:46.811 { 00:21:46.811 "subsystem": "iobuf", 00:21:46.811 "config": [ 00:21:46.811 { 00:21:46.811 "method": "iobuf_set_options", 00:21:46.811 "params": { 00:21:46.811 "small_pool_count": 8192, 00:21:46.811 "large_pool_count": 1024, 00:21:46.811 "small_bufsize": 8192, 00:21:46.811 "large_bufsize": 135168, 00:21:46.811 "enable_numa": false 00:21:46.811 } 00:21:46.811 } 00:21:46.811 ] 00:21:46.811 }, 00:21:46.811 { 00:21:46.811 "subsystem": "sock", 00:21:46.811 "config": [ 00:21:46.811 { 00:21:46.811 "method": "sock_set_default_impl", 00:21:46.811 "params": { 00:21:46.811 "impl_name": "uring" 00:21:46.811 } 00:21:46.811 }, 00:21:46.811 { 00:21:46.811 "method": "sock_impl_set_options", 00:21:46.811 "params": { 00:21:46.811 "impl_name": "ssl", 00:21:46.811 "recv_buf_size": 4096, 00:21:46.811 "send_buf_size": 4096, 00:21:46.811 "enable_recv_pipe": true, 00:21:46.811 "enable_quickack": false, 00:21:46.811 "enable_placement_id": 0, 00:21:46.811 "enable_zerocopy_send_server": true, 00:21:46.811 "enable_zerocopy_send_client": false, 00:21:46.811 "zerocopy_threshold": 0, 00:21:46.811 "tls_version": 0, 00:21:46.811 "enable_ktls": false 00:21:46.811 } 00:21:46.811 }, 00:21:46.811 { 00:21:46.811 "method": "sock_impl_set_options", 00:21:46.811 "params": { 00:21:46.811 "impl_name": "posix", 00:21:46.811 "recv_buf_size": 2097152, 00:21:46.811 "send_buf_size": 2097152, 00:21:46.811 "enable_recv_pipe": true, 00:21:46.811 "enable_quickack": false, 00:21:46.811 "enable_placement_id": 0, 00:21:46.811 "enable_zerocopy_send_server": true, 00:21:46.811 "enable_zerocopy_send_client": false, 00:21:46.811 "zerocopy_threshold": 0, 00:21:46.811 "tls_version": 0, 00:21:46.811 "enable_ktls": false 00:21:46.811 } 00:21:46.811 }, 00:21:46.811 { 00:21:46.811 "method": "sock_impl_set_options", 00:21:46.811 "params": { 00:21:46.811 "impl_name": "uring", 00:21:46.811 
"recv_buf_size": 2097152, 00:21:46.811 "send_buf_size": 2097152, 00:21:46.811 "enable_recv_pipe": true, 00:21:46.811 "enable_quickack": false, 00:21:46.811 "enable_placement_id": 0, 00:21:46.811 "enable_zerocopy_send_server": false, 00:21:46.811 "enable_zerocopy_send_client": false, 00:21:46.811 "zerocopy_threshold": 0, 00:21:46.811 "tls_version": 0, 00:21:46.811 "enable_ktls": false 00:21:46.811 } 00:21:46.811 } 00:21:46.811 ] 00:21:46.811 }, 00:21:46.811 { 00:21:46.811 "subsystem": "vmd", 00:21:46.811 "config": [] 00:21:46.811 }, 00:21:46.811 { 00:21:46.811 "subsystem": "accel", 00:21:46.811 "config": [ 00:21:46.811 { 00:21:46.811 "method": "accel_set_options", 00:21:46.811 "params": { 00:21:46.811 "small_cache_size": 128, 00:21:46.811 "large_cache_size": 16, 00:21:46.811 "task_count": 2048, 00:21:46.811 "sequence_count": 2048, 00:21:46.811 "buf_count": 2048 00:21:46.811 } 00:21:46.811 } 00:21:46.811 ] 00:21:46.811 }, 00:21:46.811 { 00:21:46.811 "subsystem": "bdev", 00:21:46.811 "config": [ 00:21:46.811 { 00:21:46.811 "method": "bdev_set_options", 00:21:46.811 "params": { 00:21:46.811 "bdev_io_pool_size": 65535, 00:21:46.811 "bdev_io_cache_size": 256, 00:21:46.811 "bdev_auto_examine": true, 00:21:46.811 "iobuf_small_cache_size": 128, 00:21:46.811 "iobuf_large_cache_size": 16 00:21:46.811 } 00:21:46.811 }, 00:21:46.811 { 00:21:46.811 "method": "bdev_raid_set_options", 00:21:46.811 "params": { 00:21:46.811 "process_window_size_kb": 1024, 00:21:46.811 "process_max_bandwidth_mb_sec": 0 00:21:46.811 } 00:21:46.811 }, 00:21:46.811 { 00:21:46.811 "method": "bdev_iscsi_set_options", 00:21:46.811 "params": { 00:21:46.811 "timeout_sec": 30 00:21:46.811 } 00:21:46.811 }, 00:21:46.811 { 00:21:46.811 "method": "bdev_nvme_set_options", 00:21:46.811 "params": { 00:21:46.811 "action_on_timeout": "none", 00:21:46.811 "timeout_us": 0, 00:21:46.811 "timeout_admin_us": 0, 00:21:46.811 "keep_alive_timeout_ms": 10000, 00:21:46.811 "arbitration_burst": 0, 00:21:46.811 "low_priority_weight": 0, 00:21:46.811 "medium_priority_weight": 0, 00:21:46.811 "high_priority_weight": 0, 00:21:46.811 "nvme_adminq_poll_period_us": 10000, 00:21:46.811 "nvme_ioq_poll_period_us": 0, 00:21:46.811 "io_queue_requests": 512, 00:21:46.811 "delay_cmd_submit": true, 00:21:46.811 "transport_retry_count": 4, 00:21:46.811 "bdev_retry_count": 3, 00:21:46.811 "transport_ack_timeout": 0, 00:21:46.811 "ctrlr_loss_timeout_sec": 0, 00:21:46.811 "reconnect_delay_sec": 0, 00:21:46.811 "fast_io_fail_timeout_sec": 0, 00:21:46.811 "disable_auto_failback": false, 00:21:46.811 "generate_uuids": false, 00:21:46.811 "transport_tos": 0, 00:21:46.811 "nvme_error_stat": false, 00:21:46.811 "rdma_srq_size": 0, 00:21:46.811 "io_path_stat": false, 00:21:46.811 "allow_accel_sequence": false, 00:21:46.811 "rdma_max_cq_size": 0, 00:21:46.811 "rdma_cm_event_timeout_ms": 0, 00:21:46.811 "dhchap_digests": [ 00:21:46.811 "sha256", 00:21:46.811 "sha384", 00:21:46.811 "sha512" 00:21:46.811 ], 00:21:46.811 "dhchap_dhgroups": [ 00:21:46.811 "null", 00:21:46.811 "ffdhe2048", 00:21:46.811 "ffdhe3072", 00:21:46.811 "ffdhe4096", 00:21:46.811 "ffdhe6144", 00:21:46.812 "ffdhe8192" 00:21:46.812 ] 00:21:46.812 } 00:21:46.812 }, 00:21:46.812 { 00:21:46.812 "method": "bdev_nvme_attach_controller", 00:21:46.812 "params": { 00:21:46.812 "name": "nvme0", 00:21:46.812 "trtype": "TCP", 00:21:46.812 "adrfam": "IPv4", 00:21:46.812 "traddr": "127.0.0.1", 00:21:46.812 "trsvcid": "4420", 00:21:46.812 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:46.812 "prchk_reftag": false, 00:21:46.812 
"prchk_guard": false, 00:21:46.812 "ctrlr_loss_timeout_sec": 0, 00:21:46.812 "reconnect_delay_sec": 0, 00:21:46.812 "fast_io_fail_timeout_sec": 0, 00:21:46.812 "psk": "key0", 00:21:46.812 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:46.812 "hdgst": false, 00:21:46.812 "ddgst": false, 00:21:46.812 "multipath": "multipath" 00:21:46.812 } 00:21:46.812 }, 00:21:46.812 { 00:21:46.812 "method": "bdev_nvme_set_hotplug", 00:21:46.812 "params": { 00:21:46.812 "period_us": 100000, 00:21:46.812 "enable": false 00:21:46.812 } 00:21:46.812 }, 00:21:46.812 { 00:21:46.812 "method": "bdev_wait_for_examine" 00:21:46.812 } 00:21:46.812 ] 00:21:46.812 }, 00:21:46.812 { 00:21:46.812 "subsystem": "nbd", 00:21:46.812 "config": [] 00:21:46.812 } 00:21:46.812 ] 00:21:46.812 }' 00:21:46.812 10:39:47 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:46.812 10:39:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:46.812 [2024-11-15 10:39:47.513395] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:21:46.812 [2024-11-15 10:39:47.513616] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85718 ] 00:21:46.812 [2024-11-15 10:39:47.658386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.071 [2024-11-15 10:39:47.717352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.071 [2024-11-15 10:39:47.852783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:47.071 [2024-11-15 10:39:47.912133] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:48.006 10:39:48 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:48.006 10:39:48 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:21:48.006 10:39:48 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:21:48.006 10:39:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:48.006 10:39:48 keyring_file -- keyring/file.sh@121 -- # jq length 00:21:48.264 10:39:48 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:48.264 10:39:48 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:21:48.264 10:39:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:48.264 10:39:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:48.264 10:39:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:48.264 10:39:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:48.264 10:39:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:48.522 10:39:49 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:21:48.522 10:39:49 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:21:48.522 10:39:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:48.522 10:39:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:48.522 10:39:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:48.522 10:39:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:48.522 10:39:49 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:48.854 10:39:49 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:21:48.855 10:39:49 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:21:48.855 10:39:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:48.855 10:39:49 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:21:49.115 10:39:49 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:21:49.115 10:39:49 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:49.115 10:39:49 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.KTAQUbMwGl /tmp/tmp.pOQLBdpxHh 00:21:49.115 10:39:49 keyring_file -- keyring/file.sh@20 -- # killprocess 85718 00:21:49.115 10:39:49 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 85718 ']' 00:21:49.115 10:39:49 keyring_file -- common/autotest_common.sh@956 -- # kill -0 85718 00:21:49.115 10:39:49 keyring_file -- common/autotest_common.sh@957 -- # uname 00:21:49.115 10:39:49 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:49.115 10:39:49 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85718 00:21:49.115 killing process with pid 85718 00:21:49.115 Received shutdown signal, test time was about 1.000000 seconds 00:21:49.115 00:21:49.115 Latency(us) 00:21:49.115 [2024-11-15T10:39:49.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.115 [2024-11-15T10:39:49.968Z] =================================================================================================================== 00:21:49.115 [2024-11-15T10:39:49.968Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:49.115 10:39:49 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:49.115 10:39:49 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:49.115 10:39:49 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85718' 00:21:49.115 10:39:49 keyring_file -- common/autotest_common.sh@971 -- # kill 85718 00:21:49.115 10:39:49 keyring_file -- common/autotest_common.sh@976 -- # wait 85718 00:21:49.373 10:39:50 keyring_file -- keyring/file.sh@21 -- # killprocess 85454 00:21:49.373 10:39:50 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 85454 ']' 00:21:49.373 10:39:50 keyring_file -- common/autotest_common.sh@956 -- # kill -0 85454 00:21:49.373 10:39:50 keyring_file -- common/autotest_common.sh@957 -- # uname 00:21:49.373 10:39:50 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:49.373 10:39:50 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85454 00:21:49.373 killing process with pid 85454 00:21:49.373 10:39:50 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:49.373 10:39:50 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:49.373 10:39:50 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85454' 00:21:49.373 10:39:50 keyring_file -- common/autotest_common.sh@971 -- # kill 85454 00:21:49.373 10:39:50 keyring_file -- common/autotest_common.sh@976 -- # wait 85454 00:21:49.632 00:21:49.632 real 0m16.252s 00:21:49.632 user 0m41.355s 00:21:49.632 sys 0m3.186s 00:21:49.632 10:39:50 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:49.632 
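The (( 2 == 2 )) and (( 1 == 1 )) refcount assertions earlier in this test are built from two small RPC-plus-jq pipelines; a standalone equivalent against the same socket would be:

  # number of keys currently registered with the keyring module
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq length

  # reference count of one key; it increases while a controller is attached with --psk key0
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
      | jq '.[] | select(.name == "key0")' | jq -r .refcnt
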
************************************ 00:21:49.632 END TEST keyring_file 00:21:49.632 ************************************ 00:21:49.632 10:39:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:49.632 10:39:50 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:21:49.632 10:39:50 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:49.632 10:39:50 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:49.632 10:39:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:49.632 10:39:50 -- common/autotest_common.sh@10 -- # set +x 00:21:49.632 ************************************ 00:21:49.632 START TEST keyring_linux 00:21:49.632 ************************************ 00:21:49.632 10:39:50 keyring_linux -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:49.892 Joined session keyring: 729017841 00:21:49.892 * Looking for test storage... 00:21:49.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:49.892 10:39:50 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:49.892 10:39:50 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:49.892 10:39:50 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:21:49.892 10:39:50 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@345 -- # : 1 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@368 -- # return 0 00:21:49.892 10:39:50 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:49.892 10:39:50 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:49.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.892 --rc genhtml_branch_coverage=1 00:21:49.892 --rc genhtml_function_coverage=1 00:21:49.892 --rc genhtml_legend=1 00:21:49.892 --rc geninfo_all_blocks=1 00:21:49.892 --rc geninfo_unexecuted_blocks=1 00:21:49.892 00:21:49.892 ' 00:21:49.892 10:39:50 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:49.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.892 --rc genhtml_branch_coverage=1 00:21:49.892 --rc genhtml_function_coverage=1 00:21:49.892 --rc genhtml_legend=1 00:21:49.892 --rc geninfo_all_blocks=1 00:21:49.892 --rc geninfo_unexecuted_blocks=1 00:21:49.892 00:21:49.892 ' 00:21:49.892 10:39:50 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:49.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.892 --rc genhtml_branch_coverage=1 00:21:49.892 --rc genhtml_function_coverage=1 00:21:49.892 --rc genhtml_legend=1 00:21:49.892 --rc geninfo_all_blocks=1 00:21:49.892 --rc geninfo_unexecuted_blocks=1 00:21:49.892 00:21:49.892 ' 00:21:49.892 10:39:50 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:49.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.892 --rc genhtml_branch_coverage=1 00:21:49.892 --rc genhtml_function_coverage=1 00:21:49.892 --rc genhtml_legend=1 00:21:49.892 --rc geninfo_all_blocks=1 00:21:49.892 --rc geninfo_unexecuted_blocks=1 00:21:49.892 00:21:49.892 ' 00:21:49.892 10:39:50 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:49.892 10:39:50 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:49.892 10:39:50 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=b4733420-cf17-49bc-adb6-f89fe6fa7a33 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:49.892 10:39:50 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:49.892 10:39:50 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.892 10:39:50 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.892 10:39:50 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.892 10:39:50 keyring_linux -- paths/export.sh@5 -- # export PATH 00:21:49.892 10:39:50 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:49.892 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:49.892 10:39:50 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:49.892 10:39:50 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:49.892 10:39:50 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:49.892 10:39:50 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:49.892 10:39:50 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:21:49.892 10:39:50 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:21:49.892 10:39:50 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:21:49.893 10:39:50 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:21:49.893 10:39:50 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:49.893 10:39:50 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:21:49.893 10:39:50 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:49.893 10:39:50 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:49.893 10:39:50 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:21:49.893 10:39:50 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:49.893 10:39:50 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:49.893 10:39:50 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:21:49.893 10:39:50 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:49.893 10:39:50 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:49.893 10:39:50 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:21:49.893 10:39:50 keyring_linux -- nvmf/common.sh@733 -- # python - 00:21:50.153 10:39:50 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:21:50.153 /tmp/:spdk-test:key0 00:21:50.153 10:39:50 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:21:50.153 10:39:50 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:21:50.153 10:39:50 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:50.153 10:39:50 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:21:50.153 10:39:50 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:50.153 10:39:50 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:50.153 10:39:50 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:21:50.153 10:39:50 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:21:50.153 10:39:50 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:50.153 10:39:50 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:21:50.153 10:39:50 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:50.153 10:39:50 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:21:50.153 10:39:50 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:21:50.153 10:39:50 keyring_linux -- nvmf/common.sh@733 -- # python - 00:21:50.153 10:39:50 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:21:50.153 10:39:50 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:21:50.153 /tmp/:spdk-test:key1 00:21:50.153 10:39:50 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85845 00:21:50.153 10:39:50 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:50.153 10:39:50 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85845 00:21:50.153 10:39:50 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 85845 ']' 00:21:50.153 10:39:50 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.153 10:39:50 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:50.153 10:39:50 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.153 10:39:50 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:50.153 10:39:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:50.153 [2024-11-15 10:39:50.887872] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
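prep_key above turns each raw hex key into an NVMe TLS interchange PSK of the form NVMeTLSkey-1:<hash>:<base64>: and stores it in a mode-0600 temp file before it is pushed into the kernel keyring. A rough sketch of that step; the exact payload packing (ASCII key bytes followed by a little-endian CRC32) is an assumption about what the format_interchange_psk helper does, not something shown verbatim in this log:

  key=00112233445566778899aabbccddeeff      # same key material as :spdk-test:key0
  path=/tmp/:spdk-test:key0

  # assumed packing: key bytes + CRC32, base64-encoded, hash id "00" meaning no HMAC
  python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:00:" + base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode() + ":")' "$key" > "$path"
  chmod 0600 "$path"

If the packing assumption holds, the output corresponds to the NVMeTLSkey-1:00:MDAx...JEiQ: string that keyctl registers a few entries below.
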
00:21:50.153 [2024-11-15 10:39:50.888162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85845 ] 00:21:50.413 [2024-11-15 10:39:51.034537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.413 [2024-11-15 10:39:51.094075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.413 [2024-11-15 10:39:51.167621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:50.672 10:39:51 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:50.672 10:39:51 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:21:50.672 10:39:51 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:21:50.672 10:39:51 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.672 10:39:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:50.672 [2024-11-15 10:39:51.380676] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.672 null0 00:21:50.672 [2024-11-15 10:39:51.412639] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:50.672 [2024-11-15 10:39:51.412832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:50.672 10:39:51 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.672 10:39:51 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:21:50.672 539083992 00:21:50.672 10:39:51 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:21:50.672 127394927 00:21:50.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:50.672 10:39:51 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85856 00:21:50.672 10:39:51 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85856 /var/tmp/bperf.sock 00:21:50.672 10:39:51 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:21:50.672 10:39:51 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 85856 ']' 00:21:50.672 10:39:51 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:50.672 10:39:51 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:50.672 10:39:51 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:50.672 10:39:51 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:50.672 10:39:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:50.672 [2024-11-15 10:39:51.488878] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
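Both PSK files are then published to the session keyring with keyctl, which is what the :spdk-test:key0 and :spdk-test:key1 names resolve to for the rest of the run (the numbers printed after each add, 539083992 and 127394927, are the kernel key serials). The basic lifecycle, assuming keyutils is available and the PSK files prepared above:

  # register the PSK as a 'user' key in the session keyring; prints the new serial
  sn=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)

  # the serial can be looked up again by name and the payload read back
  keyctl search @s user :spdk-test:key0
  keyctl print "$sn"

  # the cleanup phase of the test unlinks it the same way
  keyctl unlink "$sn"
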
00:21:50.672 [2024-11-15 10:39:51.489134] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85856 ] 00:21:50.930 [2024-11-15 10:39:51.635234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.930 [2024-11-15 10:39:51.701122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.930 10:39:51 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:50.930 10:39:51 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:21:50.930 10:39:51 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:21:50.930 10:39:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:21:51.498 10:39:52 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:21:51.498 10:39:52 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:51.757 [2024-11-15 10:39:52.384972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:51.757 10:39:52 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:51.757 10:39:52 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:52.016 [2024-11-15 10:39:52.683543] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:52.016 nvme0n1 00:21:52.016 10:39:52 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:21:52.016 10:39:52 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:21:52.016 10:39:52 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:52.016 10:39:52 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:52.016 10:39:52 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:52.016 10:39:52 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:52.275 10:39:53 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:21:52.275 10:39:53 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:52.275 10:39:53 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:21:52.275 10:39:53 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:21:52.275 10:39:53 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:52.275 10:39:53 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:52.275 10:39:53 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:21:52.534 10:39:53 keyring_linux -- keyring/linux.sh@25 -- # sn=539083992 00:21:52.534 10:39:53 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:21:52.534 10:39:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
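Because this bdevperf was started with --wait-for-rpc, the Linux-keyring integration has to be enabled before the framework finishes initializing; only then can a controller be attached with a PSK named after a session-keyring entry rather than a file. A condensed sketch of the RPC sequence the trace walks through, against the same socket:

  rpc="scripts/rpc.py -s /var/tmp/bperf.sock"

  $rpc keyring_linux_set_options --enable     # resolve ':spdk-test:*' keys through keyctl
  $rpc framework_start_init
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

  # the attached key now shows up in the module's view, kernel serial included
  $rpc keyring_get_keys | jq '.[] | select(.name == ":spdk-test:key0")'
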
00:21:52.534 10:39:53 keyring_linux -- keyring/linux.sh@26 -- # [[ 539083992 == \5\3\9\0\8\3\9\9\2 ]] 00:21:52.534 10:39:53 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 539083992 00:21:52.534 10:39:53 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:21:52.534 10:39:53 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:52.793 Running I/O for 1 seconds... 00:21:53.729 11839.00 IOPS, 46.25 MiB/s 00:21:53.729 Latency(us) 00:21:53.729 [2024-11-15T10:39:54.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.729 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:53.729 nvme0n1 : 1.01 11841.27 46.25 0.00 0.00 10745.09 7208.96 19779.96 00:21:53.729 [2024-11-15T10:39:54.582Z] =================================================================================================================== 00:21:53.729 [2024-11-15T10:39:54.582Z] Total : 11841.27 46.25 0.00 0.00 10745.09 7208.96 19779.96 00:21:53.729 { 00:21:53.729 "results": [ 00:21:53.729 { 00:21:53.729 "job": "nvme0n1", 00:21:53.729 "core_mask": "0x2", 00:21:53.729 "workload": "randread", 00:21:53.729 "status": "finished", 00:21:53.729 "queue_depth": 128, 00:21:53.729 "io_size": 4096, 00:21:53.729 "runtime": 1.010702, 00:21:53.729 "iops": 11841.27467839185, 00:21:53.729 "mibps": 46.254979212468164, 00:21:53.729 "io_failed": 0, 00:21:53.729 "io_timeout": 0, 00:21:53.729 "avg_latency_us": 10745.085386485172, 00:21:53.729 "min_latency_us": 7208.96, 00:21:53.729 "max_latency_us": 19779.956363636364 00:21:53.729 } 00:21:53.729 ], 00:21:53.729 "core_count": 1 00:21:53.729 } 00:21:53.729 10:39:54 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:53.729 10:39:54 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:54.047 10:39:54 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:21:54.047 10:39:54 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:21:54.047 10:39:54 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:54.047 10:39:54 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:54.047 10:39:54 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:54.047 10:39:54 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:54.321 10:39:55 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:21:54.321 10:39:55 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:54.321 10:39:55 keyring_linux -- keyring/linux.sh@23 -- # return 00:21:54.321 10:39:55 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:54.321 10:39:55 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:21:54.321 10:39:55 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:54.321 
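The one-second randread pass and the JSON result block a few entries above come from bdevperf's companion RPC script rather than from the bdevperf command line; once the run completes the controller is detached and the key checks are repeated. Roughly, with the repo-relative paths used in this run:

  # start the preconfigured job set on the idle (-z) bdevperf and wait for the summary
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

  # detach afterwards; with the controller gone, keyring_get_keys reports no keys left
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
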
10:39:55 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:54.321 10:39:55 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:54.321 10:39:55 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:54.321 10:39:55 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:54.321 10:39:55 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:54.321 10:39:55 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:54.580 [2024-11-15 10:39:55.338810] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:54.580 [2024-11-15 10:39:55.339432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba85d0 (107): Transport endpoint is not connected 00:21:54.580 [2024-11-15 10:39:55.340420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba85d0 (9): Bad file descriptor 00:21:54.580 [2024-11-15 10:39:55.341417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:21:54.580 [2024-11-15 10:39:55.341461] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:54.580 [2024-11-15 10:39:55.341474] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:54.580 [2024-11-15 10:39:55.341485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:21:54.580 request: 00:21:54.580 { 00:21:54.580 "name": "nvme0", 00:21:54.580 "trtype": "tcp", 00:21:54.580 "traddr": "127.0.0.1", 00:21:54.580 "adrfam": "ipv4", 00:21:54.580 "trsvcid": "4420", 00:21:54.580 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:54.580 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:54.580 "prchk_reftag": false, 00:21:54.580 "prchk_guard": false, 00:21:54.580 "hdgst": false, 00:21:54.580 "ddgst": false, 00:21:54.580 "psk": ":spdk-test:key1", 00:21:54.580 "allow_unrecognized_csi": false, 00:21:54.580 "method": "bdev_nvme_attach_controller", 00:21:54.580 "req_id": 1 00:21:54.580 } 00:21:54.580 Got JSON-RPC error response 00:21:54.580 response: 00:21:54.580 { 00:21:54.580 "code": -5, 00:21:54.580 "message": "Input/output error" 00:21:54.580 } 00:21:54.580 10:39:55 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:21:54.580 10:39:55 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:54.580 10:39:55 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:54.580 10:39:55 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:54.580 10:39:55 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:21:54.580 10:39:55 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:54.580 10:39:55 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:21:54.580 10:39:55 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:21:54.580 10:39:55 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:21:54.580 10:39:55 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:54.580 10:39:55 keyring_linux -- keyring/linux.sh@33 -- # sn=539083992 00:21:54.580 10:39:55 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 539083992 00:21:54.580 1 links removed 00:21:54.580 10:39:55 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:54.580 10:39:55 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:21:54.580 10:39:55 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:21:54.580 10:39:55 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:21:54.580 10:39:55 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:21:54.580 10:39:55 keyring_linux -- keyring/linux.sh@33 -- # sn=127394927 00:21:54.581 10:39:55 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 127394927 00:21:54.581 1 links removed 00:21:54.581 10:39:55 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85856 00:21:54.581 10:39:55 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 85856 ']' 00:21:54.581 10:39:55 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 85856 00:21:54.581 10:39:55 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:21:54.581 10:39:55 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:54.581 10:39:55 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85856 00:21:54.581 killing process with pid 85856 00:21:54.581 Received shutdown signal, test time was about 1.000000 seconds 00:21:54.581 00:21:54.581 Latency(us) 00:21:54.581 [2024-11-15T10:39:55.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.581 [2024-11-15T10:39:55.434Z] =================================================================================================================== 00:21:54.581 [2024-11-15T10:39:55.434Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:54.581 10:39:55 keyring_linux -- 
common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:54.581 10:39:55 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:54.581 10:39:55 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85856' 00:21:54.581 10:39:55 keyring_linux -- common/autotest_common.sh@971 -- # kill 85856 00:21:54.581 10:39:55 keyring_linux -- common/autotest_common.sh@976 -- # wait 85856 00:21:54.839 10:39:55 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85845 00:21:54.839 10:39:55 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 85845 ']' 00:21:54.839 10:39:55 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 85845 00:21:54.839 10:39:55 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:21:54.839 10:39:55 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:54.839 10:39:55 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85845 00:21:54.839 killing process with pid 85845 00:21:54.839 10:39:55 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:54.839 10:39:55 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:54.839 10:39:55 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85845' 00:21:54.839 10:39:55 keyring_linux -- common/autotest_common.sh@971 -- # kill 85845 00:21:54.839 10:39:55 keyring_linux -- common/autotest_common.sh@976 -- # wait 85845 00:21:55.404 00:21:55.404 real 0m5.584s 00:21:55.404 user 0m10.937s 00:21:55.404 sys 0m1.610s 00:21:55.404 10:39:56 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:55.404 ************************************ 00:21:55.404 END TEST keyring_linux 00:21:55.404 ************************************ 00:21:55.404 10:39:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:55.404 10:39:56 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:21:55.404 10:39:56 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:55.404 10:39:56 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:55.404 10:39:56 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:21:55.404 10:39:56 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:21:55.404 10:39:56 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:21:55.404 10:39:56 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:55.404 10:39:56 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:55.404 10:39:56 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:55.404 10:39:56 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:21:55.404 10:39:56 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:55.404 10:39:56 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:21:55.404 10:39:56 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:55.404 10:39:56 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:55.404 10:39:56 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:21:55.404 10:39:56 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:21:55.404 10:39:56 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:21:55.404 10:39:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:55.404 10:39:56 -- common/autotest_common.sh@10 -- # set +x 00:21:55.404 10:39:56 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:21:55.404 10:39:56 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:21:55.404 10:39:56 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:21:55.404 10:39:56 -- common/autotest_common.sh@10 -- # set +x 00:21:57.305 INFO: APP EXITING 00:21:57.305 INFO: killing all VMs 
00:21:57.305 INFO: killing vhost app 00:21:57.305 INFO: EXIT DONE 00:21:57.872 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:57.872 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:21:57.872 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:21:58.809 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:58.809 Cleaning 00:21:58.809 Removing: /var/run/dpdk/spdk0/config 00:21:58.809 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:58.809 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:58.809 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:58.809 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:58.809 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:58.809 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:58.809 Removing: /var/run/dpdk/spdk1/config 00:21:58.809 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:58.809 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:58.809 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:58.809 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:58.809 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:58.809 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:58.809 Removing: /var/run/dpdk/spdk2/config 00:21:58.809 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:58.809 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:58.809 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:58.809 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:58.809 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:58.809 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:58.809 Removing: /var/run/dpdk/spdk3/config 00:21:58.809 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:58.809 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:58.809 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:58.809 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:58.809 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:58.809 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:58.809 Removing: /var/run/dpdk/spdk4/config 00:21:58.809 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:58.809 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:58.809 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:58.809 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:58.809 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:58.809 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:58.809 Removing: /dev/shm/nvmf_trace.0 00:21:58.809 Removing: /dev/shm/spdk_tgt_trace.pid56705 00:21:58.809 Removing: /var/run/dpdk/spdk0 00:21:58.809 Removing: /var/run/dpdk/spdk1 00:21:58.809 Removing: /var/run/dpdk/spdk2 00:21:58.809 Removing: /var/run/dpdk/spdk3 00:21:58.809 Removing: /var/run/dpdk/spdk4 00:21:58.809 Removing: /var/run/dpdk/spdk_pid56552 00:21:58.809 Removing: /var/run/dpdk/spdk_pid56705 00:21:58.809 Removing: /var/run/dpdk/spdk_pid56904 00:21:58.809 Removing: /var/run/dpdk/spdk_pid56985 00:21:58.809 Removing: /var/run/dpdk/spdk_pid57018 00:21:58.809 Removing: /var/run/dpdk/spdk_pid57124 00:21:58.809 Removing: /var/run/dpdk/spdk_pid57138 00:21:58.809 Removing: /var/run/dpdk/spdk_pid57272 00:21:58.809 Removing: /var/run/dpdk/spdk_pid57473 00:21:58.809 Removing: /var/run/dpdk/spdk_pid57628 00:21:58.809 Removing: /var/run/dpdk/spdk_pid57701 00:21:58.809 
Removing: /var/run/dpdk/spdk_pid57785 00:21:58.809 Removing: /var/run/dpdk/spdk_pid57884 00:21:58.809 Removing: /var/run/dpdk/spdk_pid57964 00:21:58.809 Removing: /var/run/dpdk/spdk_pid58008 00:21:58.809 Removing: /var/run/dpdk/spdk_pid58038 00:21:58.809 Removing: /var/run/dpdk/spdk_pid58107 00:21:58.809 Removing: /var/run/dpdk/spdk_pid58196 00:21:58.809 Removing: /var/run/dpdk/spdk_pid58640 00:21:58.809 Removing: /var/run/dpdk/spdk_pid58679 00:21:58.809 Removing: /var/run/dpdk/spdk_pid58723 00:21:58.810 Removing: /var/run/dpdk/spdk_pid58731 00:21:58.810 Removing: /var/run/dpdk/spdk_pid58798 00:21:58.810 Removing: /var/run/dpdk/spdk_pid58814 00:21:58.810 Removing: /var/run/dpdk/spdk_pid58881 00:21:58.810 Removing: /var/run/dpdk/spdk_pid58890 00:21:58.810 Removing: /var/run/dpdk/spdk_pid58935 00:21:58.810 Removing: /var/run/dpdk/spdk_pid58946 00:21:58.810 Removing: /var/run/dpdk/spdk_pid58991 00:21:58.810 Removing: /var/run/dpdk/spdk_pid59002 00:21:58.810 Removing: /var/run/dpdk/spdk_pid59138 00:21:58.810 Removing: /var/run/dpdk/spdk_pid59168 00:21:58.810 Removing: /var/run/dpdk/spdk_pid59256 00:21:58.810 Removing: /var/run/dpdk/spdk_pid59588 00:21:59.069 Removing: /var/run/dpdk/spdk_pid59600 00:21:59.069 Removing: /var/run/dpdk/spdk_pid59631 00:21:59.069 Removing: /var/run/dpdk/spdk_pid59650 00:21:59.069 Removing: /var/run/dpdk/spdk_pid59660 00:21:59.069 Removing: /var/run/dpdk/spdk_pid59684 00:21:59.069 Removing: /var/run/dpdk/spdk_pid59698 00:21:59.069 Removing: /var/run/dpdk/spdk_pid59719 00:21:59.069 Removing: /var/run/dpdk/spdk_pid59738 00:21:59.069 Removing: /var/run/dpdk/spdk_pid59746 00:21:59.069 Removing: /var/run/dpdk/spdk_pid59767 00:21:59.069 Removing: /var/run/dpdk/spdk_pid59786 00:21:59.069 Removing: /var/run/dpdk/spdk_pid59805 00:21:59.069 Removing: /var/run/dpdk/spdk_pid59815 00:21:59.069 Removing: /var/run/dpdk/spdk_pid59834 00:21:59.069 Removing: /var/run/dpdk/spdk_pid59853 00:21:59.069 Removing: /var/run/dpdk/spdk_pid59863 00:21:59.069 Removing: /var/run/dpdk/spdk_pid59882 00:21:59.069 Removing: /var/run/dpdk/spdk_pid59901 00:21:59.069 Removing: /var/run/dpdk/spdk_pid59922 00:21:59.069 Removing: /var/run/dpdk/spdk_pid59947 00:21:59.069 Removing: /var/run/dpdk/spdk_pid59966 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60000 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60062 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60096 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60106 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60134 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60144 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60153 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60201 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60210 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60244 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60248 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60263 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60267 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60282 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60286 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60301 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60311 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60339 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60366 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60375 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60407 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60415 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60427 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60464 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60480 00:21:59.069 Removing: 
/var/run/dpdk/spdk_pid60507 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60514 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60522 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60529 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60537 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60544 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60552 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60559 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60641 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60691 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60809 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60841 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60882 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60902 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60924 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60933 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60970 00:21:59.069 Removing: /var/run/dpdk/spdk_pid60991 00:21:59.069 Removing: /var/run/dpdk/spdk_pid61069 00:21:59.069 Removing: /var/run/dpdk/spdk_pid61085 00:21:59.069 Removing: /var/run/dpdk/spdk_pid61129 00:21:59.069 Removing: /var/run/dpdk/spdk_pid61198 00:21:59.069 Removing: /var/run/dpdk/spdk_pid61248 00:21:59.069 Removing: /var/run/dpdk/spdk_pid61284 00:21:59.069 Removing: /var/run/dpdk/spdk_pid61384 00:21:59.069 Removing: /var/run/dpdk/spdk_pid61430 00:21:59.069 Removing: /var/run/dpdk/spdk_pid61468 00:21:59.069 Removing: /var/run/dpdk/spdk_pid61695 00:21:59.069 Removing: /var/run/dpdk/spdk_pid61792 00:21:59.069 Removing: /var/run/dpdk/spdk_pid61821 00:21:59.069 Removing: /var/run/dpdk/spdk_pid61850 00:21:59.069 Removing: /var/run/dpdk/spdk_pid61884 00:21:59.069 Removing: /var/run/dpdk/spdk_pid61917 00:21:59.069 Removing: /var/run/dpdk/spdk_pid61951 00:21:59.069 Removing: /var/run/dpdk/spdk_pid61988 00:21:59.069 Removing: /var/run/dpdk/spdk_pid62370 00:21:59.069 Removing: /var/run/dpdk/spdk_pid62408 00:21:59.069 Removing: /var/run/dpdk/spdk_pid62750 00:21:59.069 Removing: /var/run/dpdk/spdk_pid63216 00:21:59.069 Removing: /var/run/dpdk/spdk_pid63503 00:21:59.340 Removing: /var/run/dpdk/spdk_pid64347 00:21:59.340 Removing: /var/run/dpdk/spdk_pid65295 00:21:59.340 Removing: /var/run/dpdk/spdk_pid65417 00:21:59.340 Removing: /var/run/dpdk/spdk_pid65480 00:21:59.340 Removing: /var/run/dpdk/spdk_pid66922 00:21:59.340 Removing: /var/run/dpdk/spdk_pid67238 00:21:59.340 Removing: /var/run/dpdk/spdk_pid71038 00:21:59.340 Removing: /var/run/dpdk/spdk_pid71414 00:21:59.340 Removing: /var/run/dpdk/spdk_pid71519 00:21:59.340 Removing: /var/run/dpdk/spdk_pid71646 00:21:59.340 Removing: /var/run/dpdk/spdk_pid71671 00:21:59.340 Removing: /var/run/dpdk/spdk_pid71692 00:21:59.340 Removing: /var/run/dpdk/spdk_pid71719 00:21:59.340 Removing: /var/run/dpdk/spdk_pid71807 00:21:59.340 Removing: /var/run/dpdk/spdk_pid71948 00:21:59.340 Removing: /var/run/dpdk/spdk_pid72097 00:21:59.340 Removing: /var/run/dpdk/spdk_pid72186 00:21:59.340 Removing: /var/run/dpdk/spdk_pid72373 00:21:59.340 Removing: /var/run/dpdk/spdk_pid72456 00:21:59.340 Removing: /var/run/dpdk/spdk_pid72541 00:21:59.340 Removing: /var/run/dpdk/spdk_pid72901 00:21:59.340 Removing: /var/run/dpdk/spdk_pid73324 00:21:59.340 Removing: /var/run/dpdk/spdk_pid73325 00:21:59.340 Removing: /var/run/dpdk/spdk_pid73326 00:21:59.340 Removing: /var/run/dpdk/spdk_pid73592 00:21:59.341 Removing: /var/run/dpdk/spdk_pid73848 00:21:59.341 Removing: /var/run/dpdk/spdk_pid74232 00:21:59.341 Removing: /var/run/dpdk/spdk_pid74240 00:21:59.341 Removing: /var/run/dpdk/spdk_pid74569 00:21:59.341 Removing: /var/run/dpdk/spdk_pid74589 
00:21:59.341 Removing: /var/run/dpdk/spdk_pid74603 00:21:59.341 Removing: /var/run/dpdk/spdk_pid74630 00:21:59.341 Removing: /var/run/dpdk/spdk_pid74645 00:21:59.341 Removing: /var/run/dpdk/spdk_pid75006 00:21:59.341 Removing: /var/run/dpdk/spdk_pid75049 00:21:59.341 Removing: /var/run/dpdk/spdk_pid75383 00:21:59.341 Removing: /var/run/dpdk/spdk_pid75586 00:21:59.341 Removing: /var/run/dpdk/spdk_pid76014 00:21:59.341 Removing: /var/run/dpdk/spdk_pid76576 00:21:59.341 Removing: /var/run/dpdk/spdk_pid77476 00:21:59.341 Removing: /var/run/dpdk/spdk_pid78118 00:21:59.341 Removing: /var/run/dpdk/spdk_pid78120 00:21:59.341 Removing: /var/run/dpdk/spdk_pid80156 00:21:59.341 Removing: /var/run/dpdk/spdk_pid80218 00:21:59.341 Removing: /var/run/dpdk/spdk_pid80279 00:21:59.341 Removing: /var/run/dpdk/spdk_pid80332 00:21:59.341 Removing: /var/run/dpdk/spdk_pid80453 00:21:59.341 Removing: /var/run/dpdk/spdk_pid80505 00:21:59.341 Removing: /var/run/dpdk/spdk_pid80558 00:21:59.341 Removing: /var/run/dpdk/spdk_pid80619 00:21:59.341 Removing: /var/run/dpdk/spdk_pid80974 00:21:59.341 Removing: /var/run/dpdk/spdk_pid82190 00:21:59.341 Removing: /var/run/dpdk/spdk_pid82342 00:21:59.341 Removing: /var/run/dpdk/spdk_pid82587 00:21:59.341 Removing: /var/run/dpdk/spdk_pid83186 00:21:59.341 Removing: /var/run/dpdk/spdk_pid83348 00:21:59.341 Removing: /var/run/dpdk/spdk_pid83505 00:21:59.341 Removing: /var/run/dpdk/spdk_pid83602 00:21:59.341 Removing: /var/run/dpdk/spdk_pid83759 00:21:59.341 Removing: /var/run/dpdk/spdk_pid83868 00:21:59.341 Removing: /var/run/dpdk/spdk_pid84593 00:21:59.341 Removing: /var/run/dpdk/spdk_pid84628 00:21:59.341 Removing: /var/run/dpdk/spdk_pid84665 00:21:59.341 Removing: /var/run/dpdk/spdk_pid84918 00:21:59.341 Removing: /var/run/dpdk/spdk_pid84949 00:21:59.341 Removing: /var/run/dpdk/spdk_pid84983 00:21:59.341 Removing: /var/run/dpdk/spdk_pid85454 00:21:59.341 Removing: /var/run/dpdk/spdk_pid85464 00:21:59.341 Removing: /var/run/dpdk/spdk_pid85718 00:21:59.341 Removing: /var/run/dpdk/spdk_pid85845 00:21:59.341 Removing: /var/run/dpdk/spdk_pid85856 00:21:59.341 Clean 00:21:59.636 10:40:00 -- common/autotest_common.sh@1451 -- # return 0 00:21:59.636 10:40:00 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:21:59.636 10:40:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:59.636 10:40:00 -- common/autotest_common.sh@10 -- # set +x 00:21:59.636 10:40:00 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:21:59.636 10:40:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:59.636 10:40:00 -- common/autotest_common.sh@10 -- # set +x 00:21:59.636 10:40:00 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:59.636 10:40:00 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:59.636 10:40:00 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:59.636 10:40:00 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:21:59.636 10:40:00 -- spdk/autotest.sh@394 -- # hostname 00:21:59.636 10:40:00 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:59.896 geninfo: WARNING: invalid characters removed from testname! 
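The hostname-tagged lcov run logged just above captures per-test coverage from the build tree into cov_test.info. A minimal sketch of that capture pattern follows, assuming the repo checkout and output directory used throughout this run; OUT and LCOV_OPTS are illustrative names introduced here, not variables from autotest.sh, and most of the --rc flags from the log are omitted for brevity.

# Sketch only; mirrors the capture step logged above, not a copy of autotest.sh.
OUT=/home/vagrant/spdk_repo/spdk/../output
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"
# -c captures .gcda/.gcno data, --no-external limits it to sources under -d,
# and -t tags the tracefile with the VM hostname (fedora39-cloud-... above).
lcov $LCOV_OPTS -c --no-external -d /home/vagrant/spdk_repo/spdk \
     -t "$(hostname)" -o "$OUT/cov_test.info"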
00:22:26.446 10:40:26 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:29.733 10:40:30 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:33.020 10:40:33 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:35.567 10:40:36 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:38.102 10:40:38 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:41.439 10:40:41 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:43.969 10:40:44 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:43.970 10:40:44 -- spdk/autorun.sh@1 -- $ timing_finish 00:22:43.970 10:40:44 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:22:43.970 10:40:44 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:43.970 10:40:44 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:22:43.970 10:40:44 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:43.970 + [[ -n 5200 ]] 00:22:43.970 + sudo kill 5200 00:22:44.008 [Pipeline] } 00:22:44.024 [Pipeline] // timeout 00:22:44.028 [Pipeline] } 00:22:44.038 [Pipeline] // stage 00:22:44.042 [Pipeline] } 00:22:44.052 [Pipeline] // catchError 00:22:44.059 [Pipeline] stage 00:22:44.061 [Pipeline] { (Stop VM) 00:22:44.071 [Pipeline] sh 00:22:44.347 + vagrant halt 00:22:48.535 ==> default: Halting domain... 
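The spdk/autotest.sh@395 through @403 steps earlier in this stage merge the pre-test baseline with the test capture and then strip out-of-scope sources before the tracefiles are cleaned up. The same flow is sketched below with the paths from the log; the loop is an editorial condensation (the script issues one lcov -r per pattern, the '/usr/*' pass additionally uses --ignore-errors unused, and the --rc flags are omitted here).

# Sketch of the coverage post-processing logged above.
OUT=/home/vagrant/spdk_repo/spdk/../output
# Merge the pre-test baseline with the test-run capture into one tracefile.
lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
# Drop DPDK, system headers, and example/app code from the combined report.
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
done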
00:22:53.814 [Pipeline] sh 00:22:54.093 + vagrant destroy -f 00:22:58.286 ==> default: Removing domain... 00:22:58.299 [Pipeline] sh 00:22:58.588 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:22:58.596 [Pipeline] } 00:22:58.612 [Pipeline] // stage 00:22:58.619 [Pipeline] } 00:22:58.635 [Pipeline] // dir 00:22:58.640 [Pipeline] } 00:22:58.655 [Pipeline] // wrap 00:22:58.660 [Pipeline] } 00:22:58.674 [Pipeline] // catchError 00:22:58.687 [Pipeline] stage 00:22:58.689 [Pipeline] { (Epilogue) 00:22:58.703 [Pipeline] sh 00:22:58.984 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:05.557 [Pipeline] catchError 00:23:05.560 [Pipeline] { 00:23:05.571 [Pipeline] sh 00:23:05.850 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:05.850 Artifacts sizes are good 00:23:05.859 [Pipeline] } 00:23:05.874 [Pipeline] // catchError 00:23:05.888 [Pipeline] archiveArtifacts 00:23:05.896 Archiving artifacts 00:23:06.031 [Pipeline] cleanWs 00:23:06.060 [WS-CLEANUP] Deleting project workspace... 00:23:06.060 [WS-CLEANUP] Deferred wipeout is used... 00:23:06.071 [WS-CLEANUP] done 00:23:06.074 [Pipeline] } 00:23:06.091 [Pipeline] // stage 00:23:06.097 [Pipeline] } 00:23:06.112 [Pipeline] // node 00:23:06.118 [Pipeline] End of Pipeline 00:23:06.156 Finished: SUCCESS